• by derefr on 8/31/2022, 12:31:13 AM

    You can really tell, in the comments sections of changes like these, who is speaking from the perspective of having a professional/business vs. a personal use-case.

    Individuals tend to be upset, while professionals are happy that individual free-riders will no longer be sucking up undue amounts of compute, and so QoS on the system will improve for them.

  • by discordance on 8/31/2022, 12:00:37 AM

    Have been experimenting lately with GPUs off vast.ai. Has worked well for experiments with Stable Diffusion and is cheap!

    Any other suggestions for where to rent cheap GPUs? I've heard about Hetzner (https://www.hetzner.com/sb?search=gpu), but they only offer 1080s.

  • by Aeolun on 8/31/2022, 12:40:39 AM

    Am I the only one who thinks it’s nice that they’re being explicit about how much they’re giving you? I found the original ‘however much we have available and feel like giving to you’ plan limit highly unprofessional.

    I got an A100 after I subscribed, so it worked out for me, but it’s still annoying that you don’t know what you’ll get.

  • by mark_l_watson on 8/31/2022, 2:54:19 AM

    I deeply appreciate Colab. I bought a nice home GPU rig a few years ago, but seldom use it. When I am lightly using Colab I use it for free and when I have more time for personal research the $10/month plan works really well. I can see occasionally paying for the $50/month plan as the need arises in the future.

    I am working on an AI book in Python. (I usually write about Lisp languages.) About half the examples will be Colab notebooks and half will be Python examples to be run on laptops.

    In any case, I like the soon-to-be-implemented changes; it sounds like a good idea to get credits and see a readout of usage and what you have left.

  • by goodfight on 8/30/2022, 11:29:43 PM

    Reeling us in with unlimited and then locking it down. Classic.

  • by frognumber on 8/31/2022, 12:48:50 AM

    I like the transparency, but this doesn't feel like the right way to do it. Computation should be free (or nearly free) if there's idle capacity, paid if Google is near capacity, and expensive/bidding if Google is above capacity.

    Flat compute units seem simple but result in a lot of waste.

  • by fibrennan on 8/31/2022, 2:56:17 PM

    At Paperspace we've long offered an alternative to Google Colab that includes free CPU, GPU, and (recently released) IPU machines.

    Free notebooks can be run for 6 hours at a time.

    More info available in docs: https://docs.paperspace.com/gradient/machines/#free-machines...

  • by moconnor on 8/31/2022, 4:47:43 AM

    At last! I love Colab but the vague promises around availability and quota made it impossible to recommend for my team to use professionally.

    I even tried and failed to get it up and running with a Google Cloud GPU recently, before just switching to Lambda, which worked the first time (but has since hit availability issues).

  • by stableskeptic on 8/31/2022, 4:30:09 PM

    Question for the Colab team:

    The restrictions listed at https://research.google.com/colaboratory/tos_v3.html differ slightly from the limits listed at https://research.google.com/colaboratory/faq.html. Specifically, tos_v3.html does not mention these items from the FAQ:

        * using a remote desktop or SSH
        * connecting to remote proxies
    
    I can appreciate why those were added - I've read posts and notebooks explaining how you can use ngrok or Cloudflare to do those things in violation of the restrictions in the FAQ, and clearly many people aren't using Colab as intended.

    Speaking as someone who has been playing around with the Colab free tier with the expectation of moving to a paid service once I know what I'm really doing, I'd like to know if it's likely these restrictions will be eased a bit with the move to a compute credit system.

    I'm still learning and haven't had a need to do those things yet, but I believe remote SSH access would greatly simplify managing things. The Jupyter interface and integrated Colab debugger are good for experimenting, but I'm worried that as I get closer to production I'll need a way to observe and change the state of long-running Colab processes the way I could with SSH, Ansible, or other existing tooling.

    Clearly I could build that myself or use something like Anvil Works (https://anvil.works), but that's time and effort I'd rather avoid if possible. So I'm hoping that the Colab team will ease the SSH restriction for people like me who want to use it for more traditional ops/monitoring of long-running tasks.

    Do you anticipate any change or easing of the SSH restriction?

  • by etaioinshrdlu on 8/31/2022, 1:03:13 AM

    Lambda Labs has run out of GPUs to rent lately. I think it’s too many people running SD.

  • by roboy on 8/31/2022, 5:24:11 AM

    I really like the increase in transparency; I found it somewhat disturbing to pay for what feels like a random amount of stuff. How should I know whether I need Pro or Pro+ if there is no estimate of what either might get me? The update does not seem to change that, though. I would love to see a plotted distribution of how much compute I might expect, or at least min/average/max run time until disconnect (right now only the max is known).

  • by minimaxir on 8/31/2022, 3:28:27 AM

    From the Google Colab product lead:

    > This has been planned for months, it's laying the groundwork to give you more transparency in your compute consumption, which is hidden from users today.

    https://twitter.com/thechrisperry/status/1564806305893584896

  • by sabalaba on 8/31/2022, 12:36:10 AM

    For those affected who want to run their Stable Diffusion notebooks more, you can always spin up a notebook on Lambda Cloud with A100s for only $1.10/hr. PyTorch, TensorFlow, and Jupyter notebooks come pre-installed:

    https://lambdalabs.com/service/gpu-cloud

  • by endisneigh on 8/31/2022, 12:12:39 AM

    Pro tip: if it costs someone something, it’s not unlimited (this is true even if you’re paying a flat fee).

  • by frederickgeek8 on 8/31/2022, 12:01:03 AM

    I just subscribed to Colab Pro+ hours before this announcement (-‸ლ)

  • by TigeriusKirk on 8/30/2022, 11:21:21 PM

    Sigh. Unlikely these changes will be of benefit to us users.

  • by porker on 8/31/2022, 3:53:09 AM

    Good. Hopefully this will reduce the randomness of type-of-GPU assignment on the Pro plan.

    I fine-tuned a model on Colab Pro earlier this year and having to launch and quit 6 or 7 times to get a faster graphics card to ensure it completed within the time limit sucked.

    I hope this will also give more transparency into whether you are assigned a whole card or a virtual slice of one - something I could never work out before!

  • by rahidz on 8/30/2022, 11:19:33 PM

    Right when I started using it for StableDiffusion. Lovely.

  • by boredemployee on 8/30/2022, 11:21:41 PM

    For those looking for good alternatives, I recommend vast.ai