Forums

Mysterious OpenSSL.SSL.SysCallError when running script on Python Anywhere

Hi, 75% of the time when my Python script runs on the PythonAnywhere server I get the exceptions below. This has never happened when I run locally, and I am really at a loss as to why it is occurring on the server. Can somebody shed some light here?

Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 444, in wrap_socket
    cnx.do_handshake()
  File "/usr/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1907, in do_handshake
    self._raise_ssl_error(self._ssl, result)
  File "/usr/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1631, in _raise_ssl_error
    raise SysCallError(errno, errorcode.get(errno))
OpenSSL.SSL.SysCallError: (110, 'ETIMEDOUT')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/usr/lib/python3.7/site-packages/urllib3/connectionpool.py", line 343, in _make_request
    self._validate_conn(conn)
  File "/usr/lib/python3.7/site-packages/urllib3/connectionpool.py", line 849, in _validate_conn
    conn.connect()
  File "/usr/lib/python3.7/site-packages/urllib3/connection.py", line 356, in connect
    ssl_context=context)
  File "/usr/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 359, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 450, in wrap_socket
    raise ssl.SSLError('bad handshake: %r' % e)
ssl.SSLError: ("bad handshake: SysCallError(110, 'ETIMEDOUT')",)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/requests/adapters.py", line 445, in send
    timeout=timeout
  File "/usr/lib/python3.7/site-packages/urllib3/connectionpool.py", line 638, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3.7/site-packages/urllib3/util/retry.py", line 398, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.tdameritrade.com', port=443): Max retries exceeded with url: /v1/marketdata/NFLX/quotes?apikey=ALEXRWILLIAM (Caused by SSLError(SSLError("bad handshake: SysCallError(110, 'ETIMEDOUT')")))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/alexrwilliam/Ameritrade.py", line 161, in <module>
    DATA = DATA.append(pd.DataFrame(get_quotes('NFLX')['NFLX'],index=[0]))
  File "/home/alexrwilliam/Ameritrade.py", line 142, in get_quotes
    quote = request_data(ticker_url)
  File "/home/alexrwilliam/Ameritrade.py", line 124, in request_data
    response = requests.get(url, headers=headers, params=params)
  File "/usr/lib/python3.7/site-packages/requests/api.py", line 72, in get
    return request('get', url, params=params, **kwargs)
  File "/usr/lib/python3.7/site-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3.7/site-packages/requests/sessions.py", line 512, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3.7/site-packages/requests/sessions.py", line 622, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3.7/site-packages/requests/adapters.py", line 511, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='api.tdameritrade.com', port=443): Max retries exceeded with url: /v1/marketdata/NFLX/quotes?apikey=ALEXRWILLIAM (Caused by SSLError(SSLError("bad handshake: SysCallError(110, 'ETIMEDOUT')")))

2019-07-10 13:52:31 -- Completed task, took 1312.00 seconds, return code was 1.

[edit by admin: formatting]

That's certainly very strange -- it looks like a low-level networking error when connecting to the TD Ameritrade API, which is a service you'd expect to be solid and reliable. Some servers do block cloud services like us from accessing them, but if that were the case here then I'd expect it to fail all of the time, not sporadically like this. You're on our most recent system image, so you have an up-to-date set of OS libraries, which makes a bug there unlikely.

The only thing that comes to mind beyond all of those is that they might be temporarily blocking you as some kind of rate-limiting process. How many queries are you making per second?

Thanks for your response!

Well, their rate limit is 1 request per 500ms, and that's what I am doing. The exception never occurs when I run the script locally, though, so I would imagine there shouldn't be a difference on the server. Is there any reason why the rate limit might be likelier to kick in when running from the server?

I could imagine that if someone else on our platform is also accessing the same API, you may be sharing the same total allowed request quota. It could also be that the code runs faster on our platform (if you don't have explicit waits in between requests). Or it could be, as Giles said, that they are stricter with cloud services.

Hi, unfortunately I am still getting this issue, and now it's happening on around 75% of the days I try to run the script. I have verified that I am not hitting the 500ms rate limit. I am also not sure that Ameritrade would be limiting by IP address, since I am using my personal API key (I figured the limits would be set per user, not per IP address?). It seems there is really no solution for this, other than running it locally or trying another server?

Hmm, just to clarify: are you running at exactly one request per 500ms? Have you tried dialing that back by, say, an extra 50ms per request?

Are you running this as a scheduled task? When you say that it fails 75% of the time, do you mean that it runs once every hour/day and fails on 75% of those runs? Or that each individual request fails 75% of the time?

Also do you do any concurrent/parallel processing?

Hi, I am running at one request per 600ms, with an explicit wait between each request.
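For reference, one minimal way to enforce that kind of gap between requests (the 600ms figure matches the interval mentioned above; the function name and session handling are just illustrative, not from the actual script):

```python
import time

MIN_INTERVAL = 0.6  # seconds between requests (600ms)
_last_request = None  # monotonic timestamp of the previous request


def throttled_get(session, url, **kwargs):
    """Sleep just long enough to keep at least MIN_INTERVAL between calls,
    then issue the GET through the given session."""
    global _last_request
    now = time.monotonic()
    if _last_request is not None:
        remaining = MIN_INTERVAL - (now - _last_request)
        if remaining > 0:
            time.sleep(remaining)
    _last_request = time.monotonic()
    return session.get(url, **kwargs)
```

Measuring from the previous request's start (rather than a fixed `sleep(0.6)` after each one) keeps the pacing consistent even when individual requests take different amounts of time.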

I mean that, of the days the scheduled task runs, the SSL error occurs on about 75% of them. I have now added exception handling that waits a second or two every time the error occurs and then retries; however, it can often take 20+ minutes before requests start working again.
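That retry pattern can be sketched roughly like this, using exponential backoff instead of a fixed one-to-two second pause so the total wait grows toward those longer outages (the function name, attempt count, and delays here are illustrative assumptions, not the actual script's code):

```python
import time

import requests


def get_with_retry(url, max_attempts=5, base_delay=2.0, **kwargs):
    """GET a URL, retrying on SSL/connection errors with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return requests.get(url, timeout=30, **kwargs)
        except (requests.exceptions.SSLError,
                requests.exceptions.ConnectionError):
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...
```

Passing an explicit `timeout` to `requests.get` also matters here: without it, a hung handshake can block indefinitely rather than failing fast and retrying.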

I am not doing any parallel processing. When I look at the running tasks, though, there are two processes listed. I'm not sure if this is normal or not:

"bash -l -c cd /home/alexrwilliam/Stock_scraper/ && python3 Ameritrade.py" and "python3 Ameritrade.py"

Is it possible that Ameritrade.py is running twice?

It is possible for it to be running twice. If one instance takes so long that it is still running when the next one starts, they would both end up running at the same time. You can check what processes your Tasks are running using the "Fetch process list" button on the Tasks page.
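If overlapping runs were the problem, a common guard is a lock taken at the top of the script so a second copy exits immediately. A minimal sketch using `fcntl.flock` (the lock file path is a made-up example; this is not part of the script above):

```python
import fcntl
import sys


def acquire_single_instance_lock(path="/tmp/ameritrade.lock"):
    """Exit immediately if another copy of this script holds the lock."""
    lockfile = open(path, "w")
    try:
        # Non-blocking exclusive lock: fails at once if already held.
        fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit("Another instance is already running; exiting.")
    # Keep a reference to the file object so the lock lives as long
    # as the process does; it is released automatically on exit.
    return lockfile
```

You would call `acquire_single_instance_lock()` once at the start of the script and keep the returned file object around for the lifetime of the process.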

Both processes I mentioned refer to the same script, Ameritrade.py. The script is coded to start and end at the same time each day, so there should be no way for one run to overlap with a previous one.

So I currently only have one task set up, with the command: "bash -l -c cd /home/alexrwilliam/Stock_scraper/ && python3 Ameritrade.py"

So I am not sure where the second process, "python3 Ameritrade.py", is coming from. I no longer have that set up as a task (I used to, though). Is it possible the old task is still running by accident and wasn't fully removed, even though I can't see it anymore?

I can't think of any way that could happen, and when I checked just now I couldn't see any processes running at all (which makes sense if your script is only meant to run at the scheduled time and then exit at some point during the day). I'll take another look at what is running shortly after the task is scheduled to run, and will post here again then.

Hi, the script has started now, so you can see the two processes in the fetched list.

Those aren't duplicate processes: the one that starts with "bash" is launching the other one. The one that starts with "python" is the actual process that is running.