Forums

Concurrent requests oddity(?)

I am probably being a fool here (it's 00:35 local time and my brain isn't exactly running at maximum throughput), but I thought it best to ask just in case.

In the initial build of my web app, I encountered a situation where I needed to retrieve data from an external service, which in turn needed to retrieve a file from my app (a file served via the standard static file handling).

Yes, I know it's strange, but it's a stand-in allowing me to move forward with other things and then drop in a proper implementation when it's ready.

So, we have something like this:

import urllib2

from django.http import HttpResponse

def weird_view(request):
    # Ask the external service to fetch a file served by this same app
    result = urllib2.urlopen("https://some.webservice.com/?url=http://mywebapp.pythonanywhere.com/path/to/file.ext")
    ...
    return HttpResponse(something)

When this view is called via uWSGI, the server process hangs until the external service times out and aborts the request. If I change the first line to:

result = urllib2.urlopen("http://mywebapp.pythonanywhere.com/url/goes/here")

The server process hangs completely and requires a reload of the app to recover! In either case, calling the view function directly from a Python console doesn't hang - I get my HttpResponse object as expected.
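For what it's worth, giving urlopen an explicit timeout at least stops the worker being wedged forever while I dig into this - a rough sketch (the timeout argument exists from Python 2.6 onwards; 10 seconds is an arbitrary choice):

import urllib2

# Bound the call so a stuck request raises urllib2.URLError / socket.timeout
# after 10 seconds instead of tying the worker up indefinitely.
result = urllib2.urlopen(
    "http://mywebapp.pythonanywhere.com/url/goes/here",
    timeout=10,
)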

I then pointed a domain at my app using the standard instructions, which allowed everything to work fine. One request (the initial request to the Django view) goes to the domain, and the secondary urlopen request goes to the PythonAnywhere subdomain.

While performing HTTP requests to your own site from within a view is icky and deserving of a DailyWTF post, uWSGI should be able to handle cases like this - it can handle concurrent requests, can't it?

Re: concurrency - it depends on the number of workers your web app has. Free accounts only get one worker, so they can't serve concurrent requests: a view that makes an HTTP request back to the same app ends up waiting on the only worker, which is busy running that very view, so the second request just queues until something times out. Paid accounts have multiple workers, so they should be fine.

I took a look at your account: your own-domain web app has 3 workers, but your old web app at serenity.pythonanywhere.com only had 1 for some reason. Sorry about that, that was a mistake on our side - I've set it back to 3. It will spin up 3 workers the next time you hit "Reload" on the Web tab.

PS - try requests instead of urllib2? It has some sensible behaviour around timeouts and the like.
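Something along these lines, roughly (the URL and the 10-second timeout are just placeholders):

import requests

# Explicit timeout so a stuck upstream call raises requests.exceptions.Timeout
# instead of hanging the worker; raise_for_status() turns HTTP errors into exceptions.
result = requests.get(
    "https://some.webservice.com/",
    params={"url": "http://mywebapp.pythonanywhere.com/path/to/file.ext"},
    timeout=10,
)
result.raise_for_status()
data = result.text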

Aha! That's solved it - it now works the 'old' way from the one domain too, so the lack of workers must have been the problem. I'd discounted this as the cause after upgrading my plan; I should have looked at the server logs, which clearly show that only one worker process was being spawned! As I said, I was probably missing something obvious thanks to the late-night coding.

requests/urllib3 was next on my list of things to try - I shall do so.