Loop an LLM with dynamic Python code generation

I implemented a neural network concept in Python (project aipy on GitHub: Julien-Livet/aipy, "Artificial intelligence with a network of connected neurons") and tested it on the ARC-AGI benchmark.
Here’s the virtuous circle I envision:

  1. The task description is given to the LLM.
  2. The primitives.py file contains a number of primitives; this file is read by the LLM.
  3. The LLM adds any missing primitives and builds the list of neurons the task requires.
  4. The engine is executed with the previous list of neurons.
  5. The engine returns the best connection it found (as a string) and the associated cost to the LLM.
  6. If the cost is not zero, the LLM analyzes the connection that was found, and the process is repeated from point 3 with the primitives that this analysis identifies as missing, ideally until convergence.

Do you have an example of Python code that would implement this process?
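For concreteness, here is a minimal sketch of the skeleton I have in mind. It assumes an OpenAI-style chat-completions client, a JSON reply format I made up for this example, and a hypothetical `run_engine(neurons)` helper standing in for the real aipy engine call; the prompts and parsing are placeholders.

```python
import json
from pathlib import Path

from openai import OpenAI  # assumption: any chat-completions client would do

client = OpenAI()
MODEL = "gpt-4o"  # assumption: use whatever model you have access to

PRIMITIVES_PATH = Path("primitives.py")

SYSTEM_PROMPT = (
    "You are given an ARC-AGI task and the current contents of primitives.py. "
    "Reply with JSON of the form "
    '{"new_primitives": "<python code to append to primitives.py>", '
    '"neurons": ["<primitive name>", ...]}'
)


def ask_llm(task_description: str, feedback: str | None = None) -> dict:
    """Steps 1-3: give the LLM the task, primitives.py and any engine feedback."""
    primitives_source = PRIMITIVES_PATH.read_text()
    parts = [f"Task:\n{task_description}", f"primitives.py:\n{primitives_source}"]
    if feedback:
        parts.append(f"Previous engine result:\n{feedback}")
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "\n\n".join(parts)},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)


def run_engine(neurons: list[str]) -> tuple[str, float]:
    """Steps 4-5 placeholder: run the aipy engine on the neuron list and
    return (best_connection, cost). Replace with the real engine call."""
    raise NotImplementedError


def solve(task_description: str, max_iterations: int = 10) -> str | None:
    feedback = None
    for iteration in range(max_iterations):
        answer = ask_llm(task_description, feedback)

        # Step 3: append the primitives the LLM says are missing.
        new_code = answer.get("new_primitives", "").strip()
        if new_code:
            with PRIMITIVES_PATH.open("a") as f:
                f.write("\n\n" + new_code + "\n")

        # Steps 4-5: run the engine on the proposed neuron list.
        best_connection, cost = run_engine(answer.get("neurons", []))
        print(f"iteration {iteration}: cost={cost}, connection={best_connection}")

        # Step 6: stop at zero cost, otherwise loop with the result as feedback.
        if cost == 0:
            return best_connection
        feedback = f"connection: {best_connection}\ncost: {cost}"
    return None


if __name__ == "__main__":
    print(solve(Path("task.json").read_text()))
```

The open questions for me are how to make the freshly appended primitives visible to the engine (e.g. reloading primitives.py before each run) and how strictly to validate the JSON the LLM returns.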