Forums

My mobile app doesn't work anymore (Code 499)

Hi guys. I have a Flask REST API deployed here which I use to process data coming from an Android app and send it to a MongoDB database. No problems for the last 10 days, everything worked like a charm. But today I noticed a lot of 499 codes in the log, and some people told me that the Android app is frozen on the "loading page". On this page, the app basically sends a GET request to recover user data from the database. Now, I really don't know how to debug this. I don't know where the problem is. In the PythonAnywhere servers? Or maybe in the Mongo service? Or maybe on the client side?

Have you checked your app's error logs already? You'll find links to them in the "Log files" section of the Web page.

Hi, I checked the Access Log. That's why I'm concerned about the 499 status code, which I'm seeing more often now, both for GET and for POST/PUT requests. I also noticed some weird messages in the Server Log, which I don't know how to interpret. Can you help? I really need to figure out where the problem is. If it's a capacity problem, I need to know quickly, and if I can solve it by upgrading my account to a more expensive package, I'll do it.

This is what I see in the Server Log right now, and the app isn't working at the moment. There are a lot of 499s in the Access Log.

Server Log screenshot: https://ibb.co/0q0FyT3

Do you mean the "HARAKIRI" message in the server log? We've got a help page for that: https://help.pythonanywhere.com/pages/502BadGateway/ -- it probably means that there was a request that took too long to be processed (over 5 min) and the process was killed. It actually matches the errors from the error logs (they have already rotated, so you need to look in the older ones in the '/var/log' directory) -- look for the endpoint in the error log that was hit when the HARAKIRI occurred and try to debug that error.
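
If it helps narrow things down, here's a minimal sketch (assuming a standard Flask app; the handler names are made up) of logging each request and its duration, so the endpoint that's approaching the 5-minute limit shows up in your logs:

```python
import logging
import time

from flask import Flask, g, request

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

@app.before_request
def log_request_start():
    # Log the incoming request too, so a request that gets killed by
    # HARAKIRI still leaves a trace of which endpoint it hit.
    g.start_time = time.monotonic()
    logging.info("started %s %s", request.method, request.path)

@app.after_request
def log_request_duration(response):
    # Slow endpoints will stand out well before the 5-minute limit.
    elapsed = time.monotonic() - g.get("start_time", time.monotonic())
    logging.info("%s %s took %.2fs (status %s)",
                 request.method, request.path, elapsed, response.status_code)
    return response
```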

Ok, this is fine and confirms my hypothesis. Now I need solutions to the problem. Do I need to upgrade my package? If the problem can't be managed, I'll have to switch to a different hosting service. Please let me know, it's important.

There are dozens of 499s and HARAKIRI messages, and the app is frozen right now. Maybe there's too much data to process?

If the requests take longer than 5 minutes to process, more workers would not help -- you should investigate those pymongo.errors.DocumentTooLarge errors and find out what is causing the requests to take so long. You could add more logging to narrow down the investigation, etc.
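
On the DocumentTooLarge side, a rough sketch of one way to catch it early (assuming pymongo 3.9+, where `bson.encode` is available; `safe_replace` and the collection argument are made-up names): log the encoded size of a document before writing it, since MongoDB rejects anything over 16MB:

```python
import logging

import bson  # ships with pymongo

MAX_BSON_SIZE = 16 * 1024 * 1024  # MongoDB's hard per-document limit

def safe_replace(collection, query, doc):
    # Measure the document as MongoDB will see it, before sending it
    # over the network, and log the size so growth shows up in the logs.
    size = len(bson.encode(doc))
    logging.info("writing document for %s: %d bytes", query, size)
    if size > MAX_BSON_SIZE:
        raise ValueError(f"document for {query} is {size} bytes, over 16MB")
    return collection.replace_one(query, doc, upsert=True)
```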

Ok, so it's almost certain I'm having issues with the database. That's the worst thing that could have happened. =D =D

If the slowness is related to some heavy processing or a long query, you could try this trick: https://help.pythonanywhere.com/pages/AsyncInWebApps/.
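
For reference, the pattern on that help page boils down to something like this sketch (endpoint and function names are invented, and the help page suggests storing job state in a database table rather than the in-memory dict used here): start the heavy work in a background thread, return immediately, and let the app poll for the result:

```python
import threading
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
results = {}  # job id -> result; use a database table in a real app

def heavy_processing(job_id, payload):
    # ... the long-running query/processing goes here ...
    results[job_id] = {"status": "done"}

@app.route("/process", methods=["POST"])
def start_job():
    job_id = str(uuid.uuid4())
    results[job_id] = {"status": "pending"}
    threading.Thread(target=heavy_processing, args=(job_id, {})).start()
    # Return straight away so this request never hits the 5-minute limit.
    return jsonify({"job_id": job_id}), 202

@app.route("/process/<job_id>")
def poll_job(job_id):
    return jsonify(results.get(job_id, {"status": "unknown"}))
```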

Ok, so the problem is on the database side. Apparently I exceeded the bandwidth limit, which is why querying is so slow. I don't think the way the API works is efficient. I have huge data blobs which are only used for data analysis and aren't needed to send user information back to the app. I should opt for a non-queryable storage solution for those blobs; I don't know whether Amazon or Google storage would work. In general, I shouldn't have any problems accessing other databases since I have a paid account. Am I right? By the way, do you have any suggestions? I know this is outside the forum's scope :) :)

That is correct - since you have a paid account, you will be able to access other databases. Be aware that pulling large amounts of data from external databases will be slower than you expect because of the network overhead, so make sure that you're filtering at the database instead of in your Python code.

What do you mean by "filtering at the database instead of in your Python code"? By the way, what if I use the MySQL included here? Maybe I can find some way to organize the JSON to fit the tabular model. Would I have the same problems with large data per subject? We are talking about a few megabytes in any case.

I mean that if you select all (or a large subset of) rows from the database and then decide which ones you want in your Python code, all of those records need to be transported across the network to where your code is running and that will take time. Yes, that also applies if individual database records are megabytes in size - that data has to move across the network.
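
Concretely, with pymongo that means putting the filter (and a projection) in the query itself instead of pulling everything back. The connection string, collection, and field names below are just illustrative:

```python
from pymongo import MongoClient

db = MongoClient("mongodb://...")["mydb"]  # placeholder connection details
user_id = "some-user"

# Slow: every document crosses the network, filtering happens in Python.
docs = [d for d in db.users.find() if d["user_id"] == user_id]

# Fast: MongoDB does the filtering, and the projection leaves the huge
# analysis blobs behind so only the small fields travel over the network.
doc = db.users.find_one(
    {"user_id": user_id},
    projection={"name": 1, "settings": 1, "_id": 0},
)
```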

Oh ok, clear. With Mongo I have to retrieve the entire user document before manipulating it, as far as I know. But I was considering using both Mongo and your MySQL. What if I use MySQL for storing the blobs? The blobs in my case would just be really huge strings (strings converted from JSON containing x/y trajectories) of up to 100,000 characters. I could store each string in a table row. Now, I have two questions: is it problematic to call both Mongo and MySQL within the same API? And are there limitations on the maximum number of characters a string stored in a SQL database cell can have?

Calling two different services in a single request is fine, except that the request time would include the latency of both. The maximum size you can have for a blob on PythonAnywhere would be 16MB, but that is unlikely to perform well if you have to pull the whole thing across the network.

In SQL, each cell would hold a string of up to 100,000 characters, roughly 100 KB. I think that would not be a problem, right? I'm also thinking of latency and write times, but that's probably something I'll figure out by testing. Now I'm concerned about the total number of rows the database will have: each individual will produce up to 400 strings (about 100,000 characters each), which I will store in separate rows. I hope there are no limitations in that sense. In any case, that information won't be retrieved through the API. Let's try. Thanks :)

That should all be fine -- clearly, the more data you store, and the more of it you pull out of the database for each request, the slower things will be, but moving a few hundred KiB over the network won't take a large amount of time. The only thing I can think of that might cause problems would be queries that filter on the large fields, as indexing wouldn't work well for them. But if your queries only filter on the normal-sized columns, I don't anticipate any problems.
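
If it's useful, here's a rough sketch of what creating such a table could look like (table, column, and connection details are all made up -- use your own from the Databases page), with the big string in a MEDIUMTEXT column and an index only on the small lookup column:

```python
import MySQLdb  # the mysqlclient driver

# Placeholder connection details.
conn = MySQLdb.connect(
    host="username.mysql.pythonanywhere-services.com",
    user="username", passwd="...", db="username$default",
)
cur = conn.cursor()

# MEDIUMTEXT holds up to 16MB, comfortably above a 100,000-character string.
# Only the small subject_id column is indexed; the blob is never filtered on.
cur.execute("""
    CREATE TABLE IF NOT EXISTS trajectories (
        id INT AUTO_INCREMENT PRIMARY KEY,
        subject_id VARCHAR(64) NOT NULL,
        trajectory MEDIUMTEXT NOT NULL,
        INDEX (subject_id)
    )
""")
conn.commit()
```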

Ok, fine. By the way, how much inbound/outbound bandwidth do I have with my account?

There's no fixed amount, but if you use a very large amount (dozens of GiB/month) we will get in touch.

Haha, ok, last question. I see there is 73% CPU usage. Does that depend on the data flowing over the network? Even if I solve the database issue, I might not have solved the CPU issue here, so maybe my app will slow down again. If that's the case, how can I get more CPU?

You can customize your account features on the "Account" page. CPU usage is CPU usage; network traffic only matters insofar as it makes the CPU do work.