Monitor Processor Use

Is there a way to monitor how much processor power my web application is using? The app often hangs and throws a 500 internal error. I can't tell if it's a hidden bug in my code or if I'm hitting the CPU allocation for my account -- although I haven't had any issues yet when I run the app locally.

That would also help me start getting an idea of my needs for when/if you narrow down the thresholds for the various plans.

Thanks! Ludovic

I think this is the nature of virtualized servers in general. We are at the whim of Amazon, or whoever the cloud provider happens to be, when it comes to resource allocation. My guess is that it comes down to resource congestion -- how many other people are on the same physical machine.

Is there nothing in your error log to help with the diagnosis? Also, it should be quite difficult to hit the limits on a web app. Are you starting a number of threads in your code? What framework are you using? Perhaps the framework does stuff that bumps up against the limits and we need to configure it better or bump the limits to accommodate it.

I don't see any new entries in the error log -- my requests just seem to time out. I didn't think what I was doing was particularly power-hungry, but thought maybe the 'low' CPU allowance was very low.

Playing around with the code a bit more, the culprit seems to be passing one of my objects from the application to the template. I am using Flask, which I guess opens a number of threads behind the scenes. I have tried passing the object as an argument to render_template() as well as saving it as a global variable (flask.g). Both avenues dramatically slow down the application, to the point where I get a 500 internal error. I wouldn't be surprised if it turned out my issue had nothing to do with PAW and was just bad design on my part, but what really puzzles me is: why would the app be so much faster when I run it locally? Where could there be a bottleneck on PAW that wouldn't be an issue on my machine?
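For reference, here is a minimal, self-contained sketch of the two approaches described (route names, template, and data are made up for illustration -- the real app's objects come from a database):

```python
# Minimal sketch of the two ways of handing objects to a template in Flask:
# passing them as a render argument vs. stashing them on flask.g.
from flask import Flask, g, render_template_string

app = Flask(__name__)

TEMPLATE = "{% for item in items %}{{ item }} {% endfor %}"

def load_items():
    # stand-in for the real query that returns a collection of objects
    return ["a", "b", "c"]

@app.route("/direct")
def direct():
    # approach 1: pass the collection directly as a template argument
    return render_template_string(TEMPLATE, items=load_items())

@app.route("/via-g")
def via_g():
    # approach 2: save it on the per-request global object flask.g
    g.items = load_items()
    return render_template_string(TEMPLATE, items=g.items)
```

Both routes render the same output; neither should be slow in itself, which is why the suspicion falls on what the objects do when the template touches them.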

Also, I am not sure if this is relevant, but the object I am trying to pass around is a collection of instances of classes created using SQLAlchemy. Would that be a particularly onerous task?

Could it be that the SQLAlchemy objects are making late hits against the database that return lots of data? On PythonAnywhere, the database is accessed across a network. If the database is on the same machine when you run the app locally, the cost of transferring a lot of data across the network could account for the difference.
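To illustrate the "late hits" idea, here's a hedged sketch (model names are hypothetical, and this may not match the actual app): SQLAlchemy relationships are lazily loaded by default, so touching them during template rendering can fire extra queries -- cheap locally, expensive when each one crosses a network.

```python
# Sketch: lazy vs. eager loading of a SQLAlchemy relationship.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, joinedload, relationship, sessionmaker

Base = declarative_base()

class Author(Base):
    __tablename__ = "authors"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    books = relationship("Book")  # lazy by default: loaded on first access

class Book(Base):
    __tablename__ = "books"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    author_id = Column(Integer, ForeignKey("authors.id"))

engine = create_engine("sqlite://")  # in-memory DB, just for the sketch
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Author(name="A", books=[Book(title="B1"), Book(title="B2")]))
session.commit()

# Lazy: this query issues one SELECT now...
author = session.query(Author).first()
# ...and another SELECT here, possibly while the template is rendering.
lazy_titles = [b.title for b in author.books]

session.expire_all()  # clear cached state so the next query does real work

# Eager: joinedload pulls the related rows in the original query, so no
# extra round-trips happen later (relevant when the DB is across a network).
author = session.query(Author).options(joinedload(Author.books)).first()
eager_titles = [b.title for b in author.books]
```

If something like this is what's happening, eager loading (or fetching plain values instead of live ORM objects before rendering) would cut the number of round-trips.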

I'm using SQLite (on a shared Dropbox folder), so I wouldn't think database hits are any slower?

Ah, OK. Our user storage also goes across the network (that's how we make sure your files stay in sync across the different machines). If there are lots of random-access hits against the file system, that may be the cause of the problem. I think we'll have to look at the performance of the file system.

Out of curiosity, have any other users reported a similar issue? I'm not quite sure how to get around it, to be honest.

We haven't heard from anyone else, but that doesn't necessarily mean that no-one else is seeing it. I'll do some experiments to see where the issue may be.

Great, thanks. I will keep looking in my code in case it's something else.

So I have narrowed the issue down to (at least) one function which looks very innocuous to me: a DB query and some simple arithmetic on the result. I've timed it on my machine vs. PAW:

Local: CPU times: user 0.27 s, sys: 0.00 s, total: 0.27 s
Wall time: 0.27 s

PAW: CPU times: user 0.35 s, sys: 0.27 s, total: 0.62 s
Wall time: 8.09 s

I would expect things to be a bit slower, of course, for all the reasons we've discussed above, and the total CPU time reflects that. However, there seems to be something else going on that just blows up the wall time, which explains why the web app consistently times out.
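The gap between CPU time and wall time is the telling part: time the process spends not computing is time spent waiting on something (disk, network, locks). A small sketch of how to measure both sides, using a sleep as a stand-in for I/O wait:

```python
# Sketch: separating CPU time from wall time. A large wall/CPU gap means
# the process is blocked waiting (e.g. on network or disk I/O), not computing.
import time

def timed(fn):
    wall0, cpu0 = time.perf_counter(), time.process_time()
    fn()
    wall = time.perf_counter() - wall0   # elapsed real ("wall") time
    cpu = time.process_time() - cpu0     # user + system CPU time
    return wall, cpu

# A sleep burns wall time but almost no CPU -- the same signature as the
# 0.62 s CPU / 8.09 s wall measurement above.
wall, cpu = timed(lambda: time.sleep(0.2))
print(f"wall {wall:.2f}s, cpu {cpu:.2f}s")
```

Wrapping the suspect DB function like this would show whether the missing ~7.5 s is genuinely wait time rather than computation.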

I understand you're not here to debug my code, but I really think there is something non-trivial going on at your end. I've checked that we're using the same versions of Python and Flask.

I'm happy to discuss the code more specifically offline, if you think that would help.

Hi ludaavics,

Looks like something we should investigate. Do you mind emailing the bit of code to us? We'll do some testing and see if we have a bug.


Out of curiosity, is it possible that moving the SQLite DB off the Dropbox folder into the standard home directory might help?

I'm not sure what sort of locking primitives SQLite uses, but it wouldn't surprise me if the Dropbox synchronisation were introducing some quirkiness?
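If it's worth trying, the move itself is trivial -- something along these lines (paths are made up for illustration; the app's DB URI would need updating to match):

```python
# Hypothetical sketch: copy the SQLite file out of the synced Dropbox folder
# and open the copy from a non-synced location instead.
import shutil
import sqlite3

def relocate_db(old_path, new_path):
    """Copy the database file and return a connection to the new copy."""
    shutil.copy2(old_path, new_path)
    return sqlite3.connect(new_path)

# e.g. conn = relocate_db("/home/me/Dropbox/app.db", "/home/me/app.db")
```

If the slowdown disappears after the move, that would point the finger fairly squarely at the synced folder.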

Edit: May well be completely unrelated, but have you seen this blog post?

Thanks Cartroo, that's very interesting indeed. I think in this instance ludaavics is only reading so it probably doesn't apply but it's definitely something we need to be aware of.

Aha, I hadn't realised that. I agree, it would be surprising if there were problems reading, unless it overlaps with a write elsewhere (I guess hypothetically this could cause problems if SQLite relies on locking primitives which aren't implemented).

For people who are planning to write to their SQLite DBs, I've just briefly skimmed the page on SQLite3 locking and it seems as if SQLite relies on both POSIX locks (i.e. fcntl()) and fsync() working as advertised. I know that at least some of these calls can be troublesome over NFS, so perhaps PA users should consider MySQL a more solid option (much as I have a soft spot for SQLite). Of course, it might be that in practice the chance of corruption is very slight, but it's worth bearing in mind for production sites.
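For the curious, here's what that locking looks like from Python on a local disk, where the fcntl() locks do work as advertised (a toy demo, not the forum poster's code):

```python
# Sketch: SQLite serialises writers with file locks, and Python surfaces a
# held lock as OperationalError ("database is locked"). On local disk this is
# reliable; the concern above is that the underlying fcntl() locks may
# misbehave on network filesystems like NFS.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None gives explicit transaction control.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("BEGIN IMMEDIATE")          # take the write lock now
writer.execute("INSERT INTO t VALUES (1)")

# timeout=0 makes the second connection fail fast instead of retrying.
other = sqlite3.connect(path, timeout=0, isolation_level=None)
try:
    other.execute("BEGIN IMMEDIATE")       # contends for the same lock
    locked = False
except sqlite3.OperationalError:           # "database is locked"
    locked = True

writer.execute("COMMIT")                   # release; writers can proceed again
```

If those locks silently fail over the network, two writers could both believe they hold the lock -- which is exactly the corruption risk being discussed.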

I can't help but think there's got to be some clever on-disk version of RCU which avoids the need for locking entirely, and just relies on the atomicity and ordering of write() calls or something... That would make for an interesting project!