Update: Solved it! It was an incompatibility between goodfire's client and evalugator, something to do with how goodfire's client handles async. Solution: goodfire's API is OpenAI-compatible, so I switched to the OpenAI SDK.
Leaving the trail of my bug-hunt journey here in case it's helpful to others who pass this way.
Things done:
Followed Jan's advice and made sure to return just a plain string in GetTextResponse(model_id=model_id, request=request, txt=response, raw_responses=[], context=None). [Important for later, I'm sure! But the failure occurs before that point, as confirmed with print statements.]
tried without the global variables, just in case (global variables in Python are always suspect, though pretty standard for the specific case of instantiating an API client that gets reused everywhere). This didn't change the error message, so I put them back for now; I'll keep trying without them after making other changes, and only leave them in once everything else works. Update: the global variables weren't the problem.
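For the record, the pattern I mean (a client instantiated once and reused everywhere) can be kept while staying easy to test. A sketch with a stand-in client class, since the real SDK client isn't needed to show the idea:

```python
from functools import lru_cache


class FakeClient:
    """Stand-in for an API client; in practice this would be the SDK's client class."""

    def __init__(self) -> None:
        self.calls = 0


@lru_cache(maxsize=1)
def get_client() -> FakeClient:
    # Created on first use, then the same instance is returned forever --
    # same effect as a module-level global, but resettable in tests via
    # get_client.cache_clear(), which makes it less suspect.
    return FakeClient()
```

The lazy getter also avoids constructing the client (and reading API keys) at import time.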
Trying next:
looking for a way to switch between multithreaded/async mode and single-worker/no-async mode. Async is obviously important for making a large number of API calls, each with long expected delays, but it makes debugging so much harder; I always add a flag to my scripts for turning it off in debug mode. I'm gonna poke around to see if your code has such a flag, and maybe add it if not. (Found the 'test_run' option, but this doesn't remove the async, sadly.) The error seems to point at the use of async in goodfire's library. Maybe this means there's a clash between async in your code and async in theirs? I'll also look to see whether I can turn off async in goodfire's lib. Hmmmmm. If the problem is a clash between goodfire's client and yours… I should try testing with the OpenAI SDK against the goodfire API.
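The kind of toggle I have in mind is cheap to add. A sketch (the function name and flag are mine, not from your codebase):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")
R = TypeVar("R")


def run_all(fn: Callable[[T], R], items: Iterable[T], debug: bool = False) -> list[R]:
    """Map fn over items, either sequentially (debug) or via a thread pool."""
    if debug:
        # Single worker, no threads: tracebacks are clean, print() output
        # is ordered, and pdb breakpoints actually work.
        return [fn(x) for x in items]
    with ThreadPoolExecutor(max_workers=8) as pool:
        # Same semantics as the list comprehension above, but concurrent;
        # pool.map preserves input order in its results.
        return list(pool.map(fn, items))
```

Flipping debug=True would have made this whole hunt much shorter, since the RuntimeError would have surfaced as one ordinary traceback.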
getting some errors in the uses of regex. I think some of your target strings should be raw strings. For example:

args = [re.sub(r"\W+", "", str(arg)) for arg in (self.api.model_id,) + args]

Note the addition of the r before the quotes in r"\W+". Alternatively, some places could use escaped backslashes like \\ instead:

df[score_col] = df[score_col].apply(lambda x: "\\phantom{0}" * (max_len - len(str(x))) + str(x))

Update: I went through and fixed all these strings.
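A quick demonstration of why the r prefix matters here:

```python
import re

# \W in the pattern means "one or more non-word characters". With the raw
# string r"\W+" the backslash reaches the regex engine untouched; without
# the r prefix it only works by luck (\W is not a recognized string escape,
# and newer Pythons warn about such sequences).
assert re.sub(r"\W+", "", "gpt-4o!") == "gpt4o"

# The two safe spellings are equivalent:
assert r"\W+" == "\\W+"
assert r"\phantom{0}" == "\\phantom{0}"
```

Sequences like "\p" in an un-prefixed "\phantom{0}" stay literal today, but Python 3.12+ emits a SyntaxWarning for them, which is likely what surfaced as errors here.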
Traceback (most recent call last):
  File "/home/ub22/.pyenv/versions/3.11.2/envs/neurolib/bin/sad", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/ub22/projects/data/sad/sad/main.py", line 446, in main
    fire.Fire(valid_command_func_map)
  File "/home/ub22/.pyenv/versions/3.11.2/envs/neurolib/lib/python3.11/site-packages/fire/core.py", line 135, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ub22/.pyenv/versions/3.11.2/envs/neurolib/lib/python3.11/site-packages/fire/core.py", line 468, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
                                ^^^^^^^^^^^^^^^^^^^^
  File "/home/ub22/.pyenv/versions/3.11.2/envs/neurolib/lib/python3.11/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)

exception calling callback for <Future at 0x7f427eab54d0 state=finished raised RuntimeError>
Traceback (most recent call last):
  File "/home/ub22/.pyenv/versions/3.11.2/lib/python3.11/concurrent/futures/_base.py", line 340, in _invoke_callbacks
    callback(self)
  File "/home/ub22/.pyenv/versions/3.11.2/envs/neurolib/lib/python3.11/site-packages/evalugator/api/api.py", line 113, in _log_response
    response_data = future.result().as_dict()
                    ^^^^^^^^^^^^^^^
  File "/home/ub22/.pyenv/versions/3.11.2/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/home/ub22/.pyenv/versions/3.11.2/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/home/ub22/.pyenv/versions/3.11.2/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ub22/projects/data/sad/providers/goodfire_provider.py", line 33, in execute
    response_text = client.chat.completions.create(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ub22/.pyenv/versions/3.11.2/envs/neurolib/lib/python3.11/site-packages/goodfire/api/chat/client.py", line 408, in create
    response = self._http.post(
               ^^^^^^^^^^^^^^^^
  File "/home/ub22/.pyenv/versions/3.11.2/envs/neurolib/lib/python3.11/site-packages/goodfire/api/utils.py", line 33, in post
    return run_async_safely(
           ^^^^^^^^^^^^^^^^^
  File "/home/ub22/.pyenv/versions/3.11.2/envs/neurolib/lib/python3.11/site-packages/goodfire/utils/asyncio.py", line 19, in run_async_safely
    loop = asyncio.get_event_loop()
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ub22/.pyenv/versions/3.11.2/lib/python3.11/asyncio/events.py", line 677, in get_event_loop
    raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'ThreadPoolExecutor-1_0'.

exception calling callback for <Future at 0x7f427ec39ad0 state=finished raised RuntimeError>
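The root of that RuntimeError, for the curious: asyncio.get_event_loop() only auto-creates a loop in the main thread, so when goodfire's run_async_safely calls it from a ThreadPoolExecutor worker (which has no loop), it raises exactly this error. A minimal reproduction, plus the usual fix of giving the worker thread its own loop via asyncio.run():

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


async def ping() -> str:
    return "pong"


def broken_worker() -> str:
    # What run_async_safely effectively does: ask for the current event loop
    # in a worker thread that has none -> RuntimeError.
    loop = asyncio.get_event_loop()
    return loop.run_until_complete(ping())


def fixed_worker() -> str:
    # asyncio.run() creates a fresh loop for this thread, runs the coroutine,
    # and tears the loop down again.
    return asyncio.run(ping())


with ThreadPoolExecutor(max_workers=1) as pool:
    try:
        pool.submit(broken_worker).result()
        broken_raised = False
    except RuntimeError:
        broken_raised = True
    result = pool.submit(fixed_worker).result()
```

So a patch inside goodfire's library was an option too, but switching to the sync OpenAI SDK sidesteps the whole problem.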