
[Errno 101] Network is unreachable often occurs in long-running scripts

My app calls the Wikipedia API frequently. It worked fine until several weeks ago, but recently it often fails with the error [Errno 101] Network is unreachable. At first I thought it was because of the frequent API calls, so I put a sleep interval between the calls, but the error still occurred. After the error, I can restart the app right away and it calls the Wikipedia API fine at first, but dozens of minutes later the error occurs again. A month ago this problem wasn't happening. Please help me fix it.

Traceback (most recent call last):
  File "/usr/lib/python3.8/site-packages/urllib3/connection.py", line 158, in _new_conn
    conn = connection.create_connection(
  File "/usr/lib/python3.8/site-packages/urllib3/util/connection.py", line 80, in create_connection
    raise err
  File "/usr/lib/python3.8/site-packages/urllib3/util/connection.py", line 70, in create_connection
    sock.connect(sa)
OSError: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.8/site-packages/urllib3/connectionpool.py", line 597, in urlopen
    httplib_response = self._make_request(conn, method, url,
  File "/usr/lib/python3.8/site-packages/urllib3/connectionpool.py", line 343, in _make_request
    self._validate_conn(conn)
  File "/usr/lib/python3.8/site-packages/urllib3/connectionpool.py", line 839, in _validate_conn
    conn.connect()
  File "/usr/lib/python3.8/site-packages/urllib3/connection.py", line 301, in connect
    conn = self._new_conn()
  File "/usr/lib/python3.8/site-packages/urllib3/connection.py", line 167, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x7f30f4bebeb0>: Failed to establish a new connection: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
    resp = conn.urlopen(
  File "/usr/lib/python3.8/site-packages/urllib3/connectionpool.py", line 637, in urlopen
    retries = retries.increment(method, url, error=e, _pool=self,
  File "/usr/lib/python3.8/site-packages/urllib3/util/retry.py", line 399, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='ja.wikipedia.org', port=443): Max retries exceeded with url: /w/api.php?action=query&format=json&titles=%E6%85%B6%E9%87%8E%E6%9D%BE%E5%8E%9F&prop=coordinates&coprop=type%7Cname%7Cdim%7Ccountry%7Cregion&coprimary=all (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f30f4bebeb0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "csv2pop.py", line 1174, in <module>
    R = S.get(url=URLjp_api, params=PARAMS)
  File "/usr/lib/python3.8/site-packages/requests/sessions.py", line 546, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python3.8/site-packages/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3.8/site-packages/requests/sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3.8/site-packages/requests/adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='ja.wikipedia.org', port=443): Max retries exceeded with url: /w/api.php?action=query&format=json&titles=%E6%85%B6%E9%87%8E%E6%9D%BE%E5%8E%9F&prop=coordinates&coprop=type%7Cname%7Cdim%7Ccountry%7Cregion&coprimary=all (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f30f4bebeb0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
07:00 ~ $

It's possible that Wikipedia has changed its policies about timing and is rate-limiting you. On a more general note, though: your code should never assume that networks are perfect. If you get an error accessing a network resource, you should always have code that catches it and retries (assuming that you want your code to run reliably). It is also almost always a good idea to build some sort of back-off into the retry: perhaps the first time you retry after 10 seconds, then after 20, then after a minute, and so on.
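A minimal sketch of that pattern for the requests call in your traceback (the function name, attempt count, timeout, and delay values here are just illustrative, not anything from your actual script):

import time
import requests

def get_with_backoff(session, url, params, max_attempts=5, base_delay=10):
    # Retry on connection errors, doubling the wait each time:
    # 10s, 20s, 40s, ... before finally giving up.
    for attempt in range(max_attempts):
        try:
            return session.get(url, params=params, timeout=30)
        except requests.exceptions.ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller see the error
            time.sleep(base_delay * 2 ** attempt)

With something like that in place, the line from your traceback would become R = get_with_backoff(S, URLjp_api, PARAMS) instead of the bare S.get(...) call.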

Next, a socket timeout happened with the YouTube API. I hadn't seen these network problems for several months. It's not only the Wikipedia API but also the YouTube API, so I suspect a proxy problem.

Traceback (most recent call last):
  File "csv2pop.py", line 1730, in <module>
    result = get_videos_search(kw_list)
  File "csv2pop.py", line 526, in get_videos_search
    youtube_res = youtube_query.execute()
  File "/usr/lib/python3.8/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/usr/lib/python3.8/site-packages/googleapiclient/http.py", line 849, in execute
    resp, content = _retry_request(
  File "/usr/lib/python3.8/site-packages/googleapiclient/http.py", line 184, in _retry_request
    raise exception
  File "/usr/lib/python3.8/site-packages/googleapiclient/http.py", line 165, in _retry_request
    resp, content = http.request(uri, method, *args, **kwargs)
  File "/usr/lib/python3.8/site-packages/httplib2/__init__.py", line 1948, in request
    (response, content) = self._request(
  File "/usr/lib/python3.8/site-packages/httplib2/__init__.py", line 1621, in _request
    (response, content) = self._conn_request(
  File "/usr/lib/python3.8/site-packages/httplib2/__init__.py", line 1528, in _conn_request
    conn.connect()
  File "/usr/lib/python3.8/site-packages/httplib2/__init__.py", line 1309, in connect
    sock.connect((self.host, self.port))
socket.timeout: timed out
15:47 ~ $

Is it still happening?

I hit this socket timeout twice today. The OSError: [Errno 101] Network is unreachable happened many times today.

For the YouTube one, perhaps your search is one that takes longer to get a response than the timeout on your connection allows. But, again, it could just be YouTube slowing your requests down. Catching these exceptions and retrying, as I originally suggested, is probably your best option; a sketch for the YouTube call is below.
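A similar sketch for the YouTube call (execute_with_retry, the attempt count, and the delays are illustrative names and values; num_retries is the googleapiclient library's own built-in retry option):

import socket
import time

def execute_with_retry(youtube_query, max_attempts=4, base_delay=10):
    for attempt in range(max_attempts):
        try:
            # num_retries lets googleapiclient retry transient failures
            # (including timeouts) internally before raising.
            return youtube_query.execute(num_retries=2)
        except socket.timeout:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** attempt)  # 10s, 20s, 40s, ...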

Thank you, glenn. I'm sure the app should handle such exceptions. However, I'd like to know exactly what happened first, because it might cause other, unknown problems.

Today I have been running the same program for several hours, and the [Errno 101] Network is unreachable errors from the Wikipedia API have not happened any more. Did you do something to my environment? Around 4:16 p.m. yesterday, my running program and the bash console suddenly stopped, and the console restarted for an unknown reason. Did you reset something? I'd like to know, for when the same kind of problem happens in the future.

Thank you for your cooperation.

Console servers do need to be restarted from time to time, and when that happens your console will be restarted too. It might then come back on a different server -- so if, for example, the site you're connecting to is blocking requests from the console server your script was initially on, the restart, combined with you restarting your script on a different server, might help unblock things -- at least until the site decides to block the new console server as well.

BTW, if you have code that you want to keep running without having to manually restart it when console servers are restarted, you should use an always-on task.

That's reasonable. No problems have occurred since the reset. I'll close this incident. Thank you for your cooperation.