PythonAnywhere slow runtimes?

I've written a Flask Python application which takes approx 15 seconds to run on my laptop locally (base model 2021 MacBook Pro). Having uploaded this to PythonAnywhere (basic paid plan), I'm finding it takes approx 1 min 38 seconds to run.

Is this normal?

Sure. Your laptop is a very expensive machine that is basically doing nothing. Our machines are shared with other people and are pretty busy all the time.

Is it fair to say that PythonAnywhere is not really a commercial solution? Or could I pay to upgrade and improve speeds?

Upgrading to the web dev / startup accounts won't give you any speed increases. We do offer separate machines for specific users and can provide information about that if you'd like, but they do cost a lot more.

If you have a hard requirement on performance, I'd maybe think about whether a shared hosting platform like PythonAnywhere is right for you. Maybe you'd be better served with some self-provisioned infrastructure on a platform like DigitalOcean/AWS/GCP. There will obviously be increased cost to that, as well as massively increased setup time. But for comparison, one of the smallest instances AWS offers (t3.micro) costs $6 a month and has 2 vCPUs that won't be anywhere near as quick as the 3GHz+ processor in your Mac.

Other than upgrading, I'd recommend looking for optimisations you might be able to do in your code. Are there computations that you might be able to do outside of a request? Are there ways in which you could make your DB queries quicker? Things like that.
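To sketch the "computations outside of a request" idea: if part of the work is a pure function of its inputs, caching it means repeat requests don't redo it. This is just an illustrative pattern, not your actual code (`expensive_metric` is a made-up placeholder):

```python
from functools import lru_cache

# Hypothetical stand-in for a heavy calculation done per request.
@lru_cache(maxsize=128)
def expensive_metric(portfolio_id: int) -> float:
    # In a real app this might load data and run the full valuation;
    # with lru_cache, repeat calls for the same id return instantly.
    return sum(i * i for i in range(100_000)) * portfolio_id

first = expensive_metric(7)   # computed
second = expensive_metric(7)  # served from the cache
print(first == second)  # True
```

For results that rarely change, you could go further and precompute them in a scheduled task so no request ever pays the cost.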

Thanks for the response. The model is something I've been developing to supplement an educational website I've been working on. It ultimately produces balance sheet metrics for a life insurance company based on hypothetical portfolios of assets and liabilities.

Admittedly, I've got c. 10k assets and 10k liabilities being valued and re-valued under various different bases. I've done what I can to optimise (e.g. avoiding python loops at all costs and instead relying on numpy matrix operations), but I suppose one run of the model is still running a lot of calculations (albeit with each individual calculation being fairly basic). No DB queries, but data is stored in csv files (uploaded to PythonAnywhere along with my code).
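For anyone finding this thread later, the kind of vectorisation described above looks roughly like this (the numbers and the discounting formula here are illustrative, not the actual model): value all 10k assets under a basis with one matrix operation instead of a Python loop.

```python
import numpy as np

rng = np.random.default_rng(0)
# 10k hypothetical assets, each with 40 annual cashflows.
cashflows = rng.uniform(50, 150, size=(10_000, 40))
years = np.arange(1, 41)

def present_values(rate: float) -> np.ndarray:
    # Discount every cashflow of every asset in one matrix product.
    discount = (1.0 + rate) ** -years   # shape (40,)
    return cashflows @ discount         # shape (10_000,)

base = present_values(0.03)
stressed = present_values(0.04)  # re-valuation under a stressed basis
print(base.shape)  # (10000,)
```

Even vectorised, each full run is still millions of floating-point operations, so a busy shared CPU will feel it.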

I don't expect to get much traffic to the site, since it's pretty niche. I also don't expect to monetise it, so there are constraints on spending on computing power since it's just coming out of my pocket. That said, I'm happy to spend a bit since it's basically a hobby.

Perhaps if I can develop a status bar that updates the user with the progress of their run then the run times will be less of an issue since they will know that it's still running and they just need to be patient.
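A minimal stdlib-only sketch of that pattern (all names here are illustrative): the long run updates a shared job record as it goes, and a lightweight status endpoint would just read from it.

```python
import threading
import time

jobs = {}  # job_id -> progress fraction; a /status route would read this
jobs_lock = threading.Lock()

def run_model(job_id: str, steps: int = 5) -> None:
    # Stand-in for the real valuation run; reports progress per step.
    for i in range(steps):
        time.sleep(0.01)  # pretend work
        with jobs_lock:
            jobs[job_id] = (i + 1) / steps

t = threading.Thread(target=run_model, args=("run-1",))
t.start()
t.join()
print(jobs["run-1"])  # 1.0
```

In a real Flask app the thread would be started by the request handler (or by a scheduled/always-on task) and the browser would poll the status route every couple of seconds.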


Came across this post from Google as I am experiencing a similar issue - I've tried everything I can to eliminate inefficiencies and limit my queries but still find things are quite slow.

I have a rather large dataset I am querying on (100,000 - 500,000 rows), but it takes <1 second locally, and on PythonAnywhere it can take anywhere from a few seconds to over a minute (that it varies suggests to me that external factors might be at play). I don't use any for loops, I am leveraging indexing in the model, and I reduced the number of queries I am making on this dataset.

I am interested in learning more about the cost of a separate machine. Other threads have recommended reaching out but the only contact I have is support and when I sent an email to them I did not get a response back. Who can I get in touch with about this?

There are two other things that increase the time of queries:

1. This is shared infrastructure, so, unlike your local machine, it is not sitting around doing nothing most of the time. It is working on code from other users, so it will be slower.
2. Network: your database is not on the same machine, so there is network latency involved. This will increase as the size of the result set that you queried increases.
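On the network point: since latency scales with the size of the result set, one mitigation is to push aggregation to the database so fewer rows cross the wire. A small illustration with an in-memory SQLite table (a stand-in for the remote MySQL server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO trades (amount) VALUES (?)",
                 [(float(i),) for i in range(100_000)])

# Pulls 100k rows back to the client, then sums there:
total_client = sum(row[0] for row in conn.execute("SELECT amount FROM trades"))

# Pulls one row back -- same answer, a fraction of the transfer:
(total_server,) = conn.execute("SELECT SUM(amount) FROM trades").fetchone()

print(total_client == total_server)  # True
```

The same idea applies to selecting only the columns you need rather than `SELECT *`.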

Thanks Glenn, that is consistent with the other forum posts I have read so far, so my next question is: what can I do about it? If I want to explore the option of a separate machine, who can I get in touch with about that? Or should I move off of PythonAnywhere?

Re: your message to support, do you remember when you sent it? I just checked our archives and I don't see anything from your registered email address recently (last message was in June) and I don't see anything with your username.

Before heading in the direction of a private machine, it would be useful to know how large the dataset you're returning from your query is:

  • If it's a small number of rows, then the issue might be on the DB side (meaning that a private DB server could help).
  • If it's a larger number of rows, it might be network latency (which would be hard to address), or it could be processing time on the machine that is making the query, so that might be addressable by a private server on that side.

Thanks Giles, I sent it on October 17; the subject line was "I am interested in a separate machine, is that possible and what would it cost?"

Thanks for the questions though, this might help narrow things down. The dataset can vary; at most it might be around 100,000 rows, which takes quite a bit of time (this does vary, sometimes it takes less time and sometimes it takes minutes to finish), but I have found that even queries returning 1,000 - 10,000 rows still take many seconds to minutes to finish.

All right. If you don't see a clear correlation between the number of rows and processing time, and similar queries also vary in how long they take, that may indicate a busy server. I was not able to find any matching email from you, unfortunately. If you're still interested in trying a private server, please re-send the email and we'll provide you with more detailed information.

Great thanks, I just sent a new email to the support email address. The subject line is "Interested in a private server"

We will handle it there.

I am experiencing the same issues described in the threads above for a very basic web application (<100 DB row interactions, only about 100 active users at any given time). I thought that increasing the number of "web workers" on my account might help with the crippling latency that users experience (sometimes to the point of a request timing out) during normal business hours (it doesn't appear to be as much of a problem at night in the United States). Will increasing web workers help, or am I pursuing the wrong solution? Worth noting that I've been using PA for years without issue, and this problem has only arisen recently...

It is a free service, we cannot complain. Sometimes you have to wait 20-30 seconds for the error.log to update and 5-10 seconds to refresh the project, but it is free.

@teamchang thanks for the support! :-) That said, @pmmver is using our paid service.

@pmmver are you using SQLite by any chance? That's the most common cause of sudden slowdowns -- it performs pretty well with small amounts of data and not much concurrency, but speed can suddenly fall off a cliff if you exceed what it can handle in-memory and without hitting the filesystem to lock tables. If you are using it, switching to MySQL (or Postgres if you prefer) will probably sort everything out. If it's not that, then check out this help page, which has some useful hints and tips on tracking down website performance issues.
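If migrating off SQLite isn't immediately practical, one stopgap worth knowing about is SQLite's WAL journal mode, which lets readers proceed while a writer is active and can soften (not eliminate) the concurrency cliff described above. A minimal sketch, assuming a hypothetical database file `app.db`:

```python
import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical path to the app's database
# Switch to write-ahead logging; readers no longer block on writers.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal
```

The setting is persistent per database file, so it only needs to be run once. It won't help if the real bottleneck is data volume rather than lock contention, in which case MySQL is still the right move.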

Hi, I'm running a website that does message automation. It uses an API for the messages, but I'm receiving the messages too late. Is this because of slow bandwidth, since I'm using a beginner's account?

We would need to see more details about what is actually slow to help you. Maybe add some logging to surface that.
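A simple way to get that logging in place is a timing decorator around the suspect calls, so the logs show exactly which step is slow. A stdlib-only sketch (`send_message` is a made-up placeholder for the real API call):

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)

def timed(fn):
    # Log how long each call takes so the slow step shows up in the logs.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            logging.info("%s took %.3fs", fn.__name__, elapsed)
    return wrapper

@timed
def send_message(payload):
    # Hypothetical stand-in for the external messaging API call.
    time.sleep(0.05)
    return "sent"

print(send_message({"to": "user"}))  # sent
```

Once you can see whether the time goes to the external API, the database, or your own code, it's much easier to say what (if anything) PythonAnywhere can do about it.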