Variation in response time.

I have a simple Django app that receives user input from a form, performs some image processing and returns an image. On my local machine response time is consistent at around 2.6 seconds. I didn't expect response time on pythonanywhere to match this, but I did expect that performance would be consistent.

Response time is varying from a little over 3 seconds to over 30 seconds.

Each request in the screenshot sends the same data, and waits until the previous request is complete before sending, so this is not an issue with busy workers.

Obviously, I would like to have the response time consistently as low as possible. Is there anything I can do on my end to improve the performance?

I did read something in a previous topic about web apps competing for resources on the server, and that heavy processing belongs in a console or scheduled tasks. Can I expect better performance if I use scheduled tasks to run an async task queue?

Thank you for your help!

There is always going to be variation in response times on shared hosting. Your laptop is essentially sitting doing nothing most of the time and all your storage is local, which is why your response times are consistent there.

Scheduled tasks are unlikely to help. They're more for work that would make a request take far too long, whereas yours generally complete in a reasonable time.

I would suggest working out where your view is spending its time. When you know that, you may find that you can optimise it to produce more consistent response times.
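One way to do that is with the standard library's cProfile. This is a minimal sketch, not your actual view: `process_image` here is a hypothetical stand-in for whatever image-processing code your view calls.

```python
import cProfile
import io
import pstats


def process_image():
    # Hypothetical stand-in for the real image-processing work in the view.
    return sum(i * i for i in range(100_000))


profiler = cProfile.Profile()
profiler.enable()
process_image()
profiler.disable()

# Print the 10 most expensive calls, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

Running that against the real view should show you which calls dominate the 2.6 seconds.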

Thanks Glenn. Just to make sure I fully understand what is happening - when I am experiencing a response time of under four seconds it's because none of the other web apps on the same server are getting requests, so my code is able to run in one go. When I am experiencing longer response times it's because other web apps are getting requests, so the server has to pause execution of my code to deal with the other requests. The only thing I can do about this is optimise my view so that the server is more likely to complete the response before it needs to pause and deal with other requests.

It follows that I need to profile the view locally rather than on pythonanywhere, so that I'm seeing where the view really spends its time and not where it has been paused.

Have I got all that right?

It might be a good idea to do the profiling both locally and on PythonAnywhere. This is because the resource contention might be in something that doesn't affect you locally, but does here.

For example, imagine that it's contention for disk access that's causing the problem. Your local disk is likely to be really quick, so if you only profiled locally, you might find that the 2.6 seconds is split:

  • 0.1 seconds disk IO
  • 2.5 seconds CPU

If you only profiled locally, that data would lead you to spend lots of time and effort optimising the CPU time. Let's say you got it down to half a second, so now your local processing was taking 0.6 seconds total, 0.1 seconds for the disk access and 0.5 for the CPU stuff.

But perhaps the timing when it took 30 seconds on PythonAnywhere was actually originally:

  • 27.5 seconds disk IO
  • 2.5 seconds CPU

That is, the actual calculations were just as fast as on your laptop, but the disk access across the network was super-slow for some reason. All of the work you'd put into optimisation would be pretty much wasted, because those 30-second hits would still be taking 28 seconds, 0.5 seconds for your optimised CPU code but still 27.5 seconds for the disk IO.

By adding logging to work out which bits are showing the biggest variation from your local timings on the live site, you'll be better positioned to work out where to optimise stuff.
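A simple sketch of that kind of logging, assuming nothing about your actual view: a small helper that times each phase with `time.perf_counter` and writes the duration to the standard `logging` module (the phase names in the commented usage are illustrative, not your real function names).

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("timing")


def timed(label, fn, *args, **kwargs):
    """Run fn, log how long it took, and return its result."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    log.info("%s took %.3f s", label, elapsed)
    return result

# Inside the view it might look like this (hypothetical helpers):
#   image = timed("disk read", read_upload, request)
#   processed = timed("processing", run_filters, image)
#   timed("disk write", save_result, processed)
```

Comparing the logged numbers from a local run against the live site would show directly whether it's the IO phases or the CPU phase that balloons to 27+ seconds.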

Hope that makes sense!

Thank you Giles, that was a lot of help! I made some optimisations and the longest response I have experienced so far is under 7 seconds, with most under 3 seconds and many under 1 second.

Excellent! That's great to know :-) Where did the bottleneck turn out to be?

Is the so-called "response time" the total running time on the python server?

Yes, that's right.