Auto-GPT for Finance – An Exploratory Guide

Last Updated on May 10, 2023

Table of contents:

  1. What is Auto-GPT?
  2. What is Auto-GPT used for?
  3. How can Auto-GPT be used for Finance?
  4. Why should I use Auto-GPT?
  5. Why shouldn’t I use Auto-GPT?
  6. Is Auto-GPT free?
  7. How to get started with Auto-GPT?
  8. How to optimize portfolios with Auto-GPT?
  9. How to make market predictions with Auto-GPT?
  10. How to develop trading strategies with Auto-GPT?
  11. How to perform investment research with Auto-GPT?
  12. How to create algorithmic trading bots with Auto-GPT?
  13. How to learn about finance with Auto-GPT?
  14. Where can I learn more about Auto-GPT?
  15. How to check Auto-GPT memory usage?
  16. My thoughts on Auto-GPT and the state of LLMs

What is Auto-GPT?

Auto-GPT is a self-prompting application built on GPT-4, the technology behind ChatGPT, that works towards the user’s goals without requiring the user to write each prompt.

This means that you give Auto-GPT a big task, and Auto-GPT will ask itself a question, analyse the answer, ask itself another question, and so on, until it finds a satisfactory solution to the big task.

Auto-GPT “thinks” of how to solve a problem on its own and shares its thought process with you.

Auto-GPT GitHub link: https://github.com/Significant-Gravitas/Auto-GPT/tree/stable#-image-generation

What is Auto-GPT used for?

Auto-GPT uses GPT-4 which allows it to be used for many things such as answering questions, exploring ideas, rewriting text, producing code, teaching, tweaking, and more. Auto-GPT allows all of this to be done autonomously by following a goal.

How can Auto-GPT be used for Finance?

Auto-GPT can be used for Finance by optimizing portfolios, teaching financial concepts, iterating on financial ideas, producing strategy ideas, creating algorithmic trading bots, performing investment research, and more.

Why should I use Auto-GPT?

  • At the time of writing, Auto-GPT uses one of the most powerful LLMs available.
  • Auto-GPT is easy to use.
  • It is free and open-source.
  • It lets the AI run autonomously until the task is complete.
  • It has access to GPT-4 and GPT-3.5.
  • It has access to the web for information gathering.
  • It has access to DALL-E.

Why shouldn’t I use Auto-GPT?

  • Auto-GPT is an experimental project.
  • Auto-GPT is very new.
  • It very likely won’t predict the market.
  • It is constrained by the models it uses.
  • It can often get stuck in a loop, gathering the same data over and over.

Is Auto-GPT free?

Yes and maybe.

Yes, because Auto-GPT is completely free to use and the project is open-sourced.

Maybe, because you require access to OpenAI’s API. As of 15th April 2023, you can sign up to access OpenAI’s API and get a free $5 credit. This expires within 3 months. Otherwise, you need to pay to access OpenAI’s API.

How to get started with Auto-GPT?

To get started with Auto-GPT, we need a few prerequisites in place:

  • Python 3.8 or higher
  • OpenAI API key

For this, I’ll be using Google Colab. You can obtain your OpenAI API key from the OpenAI website. Note that you will need a premium OpenAI subscription if you want to use GPT-4; if you aren’t subscribed, Auto-GPT will fall back to GPT-3.5.

After grabbing the API key from OpenAI, we proceed to clone the repository by running the following command:

git clone https://github.com/Torantulino/Auto-GPT.git

After the repository has been cloned, we will navigate into the main folder and install the required dependencies by running the following commands:

cd Auto-GPT
pip install -r requirements.txt

I suggest switching to the stable branch by running git checkout stable

If you’re using Google Colab, take note that you will need to prefix these commands with ! and the cd command will need to have the % prefix.

Rename .env.template to .env and fill in your OPENAI_API_KEY. To run a terminal inside Google Colab, we will need to add one more dependency that allows us to do that:

!pip install colab-xterm

Google Colab won’t let you access hidden files such as .env from the GUI. To avoid this issue, we copy the .env.template file to a visible env file, add our API key through the GUI, and then rename the file from env back to .env.

!cp .env.template env
# edit the file to your preference (in my case, only the API key was added)
!mv env .env

The last step for Google Colab users will be to load the terminal extension in order to be able to use the model:

%load_ext colabxterm
%xterm

To run Auto-GPT, we can write the command below. But note that for this article I don’t want us to use the default AI persona and the goals that are set for it. We’ll create our own persona geared toward the goals we want it to pursue.

python -m autogpt

If you don’t have the premium subscription, make sure to pass the --gpt3only flag at the end of the above command.

Let’s move on to creating a persona for our next section.

How to optimize portfolios with Auto-GPT?

To optimize a portfolio with Auto-GPT, we will need to create a persona with adequate goals to strive for and a role to play. We do this by passing these parameters to the model upon instantiation. For portfolio optimization, we do the following:

Name: PortAI
Role: an AI designed to optimize a portfolio with a size of $20k that needs to be allocated to the following assets: global equity ETF, bond ETF, commodities, and crypto ETF.
Goal 1: perform portfolio optimization for the Sharpe ratio metric
Goal 2: use a python library to obtain the historical data
Goal 3: assign a percentage to allocate to each of the provided assets
Goal 4: play to your strengths as a Large Language Model
Goal 5: save the proposed portfolio details to a markdown file and shutdown after confirming it was saved

As you can see from the above parameters, we gave it enough details while also being vague enough to see how the AI will respond and what it will produce for us. As this is just a showcase, I’ll only run it for a few iterations to see how it behaves and “thinks”.

Here is what the first output looked like:

THOUGHTS: To optimize the portfolio for the Sharpe ratio metric, we need to obtain data for the assets and then allocate a percentage to each asset. We can use the yfinance python library to obtain data for the assets. We can then use the PyPortfolioOpt library to perform portfolio optimization for the Sharpe ratio metric. We can then assign a percentage to allocate to each of the provided assets based on the optimized portfolio. We can save the proposed portfolio details to a markdown file and shutdown after confirming it was saved.

REASONING: I have determined the next steps to take based on the goals and constraints provided.
PLAN:

  • Use the yfinance python library to obtain data for the assets
  • Use the PyPortfolioOpt library to perform portfolio optimization for the Sharpe ratio metric
  • Assign a percentage to allocate to each of the provided assets based on the optimized portfolio
  • Save the proposed portfolio details to a markdown file and shutdown after confirming it was saved

CRITICISM: I need to ensure that I am using the most up-to-date version of the PyPortfolioOpt library and that I am allocating the percentages correctly based on the optimized portfolio.

SPEAK: I will use the yfinance python library to obtain data for the assets, use the PyPortfolioOpt library to perform portfolio optimization for the Sharpe ratio metric, assign a percentage to allocate to each of the provided assets based on the optimized portfolio, save the proposed portfolio details to a markdown file and shutdown after confirming it was saved.

I’ll allow it to run steps automatically by writing y -7, which approves the next 7 commands without asking for confirmation each time.

The model went to obtain the data from yfinance and tried to use PyPortfolioOpt and Pandas to deliver the results. Sadly, it got stuck and went in circles when it came to actually running the code. This happened on each new run.
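For reference, here is a minimal sketch of the optimization the AI was attempting. It swaps yfinance and PyPortfolioOpt for synthetic returns and a plain scipy maximum-Sharpe solve, so all numbers are illustrative rather than real market data:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic daily returns for 4 assets (stand-ins for the equity,
# bond, commodity, and crypto ETFs in the persona's role)
rng = np.random.default_rng(42)
returns = rng.normal(loc=[0.0005, 0.0002, 0.0003, 0.0010],
                     scale=[0.010, 0.004, 0.008, 0.030],
                     size=(252, 4))

mu = returns.mean(axis=0) * 252   # annualized expected returns
cov = np.cov(returns.T) * 252     # annualized covariance matrix

def neg_sharpe(w, rf=0.02):
    # Negative Sharpe ratio, since scipy minimizes
    ret = w @ mu
    vol = np.sqrt(w @ cov @ w)
    return -(ret - rf) / vol

n = len(mu)
result = minimize(neg_sharpe, np.full(n, 1 / n),
                  bounds=[(0, 1)] * n,
                  constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])

weights = result.x
allocation = weights * 20_000     # dollar allocation of the $20k portfolio
```

With real data, the yfinance download and PyPortfolioOpt’s EfficientFrontier would replace the synthetic returns and the hand-rolled solver, which is essentially what the AI planned to do.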

But when I repurposed the role by removing the asset classes and simply asked it to create a $20k portfolio with its own allocation ideas, this is what it returned:

Proposed Portfolio Details:

– Asset 1: 20% Stocks
– Asset 2: 30% Bonds
– Asset 3: 50% Gold

Total Portfolio Value: $20000

Sharpe Ratio: 1.5

How to make market predictions with Auto-GPT?

To make market predictions with Auto-GPT, we will utilize the capability of creating a persona with a focus on “predicting” the market by setting it up with an adequate role and goals.

We all know that these models wouldn’t be so easily available if they were capable of doing this, but it is often fun to see how the AI reasons about these topics.

Name: NostradAImus
Role: an AI designed to accurately predict future market trends and overall price movements across various financial sectors.
Goal 1: perform market trend predictions across major sectors
Goal 2: perform price movement predictions across major sectors
Goal 3: propose actions to be performed according to the given market predictions
Goal 4: play to your strengths as a Large Language Model
Goal 5: aim to be accurate and provide reasoning for the given predictions

THOUGHTS: I think I should start by performing market trend predictions across major sectors. This will give me a good idea of the overall direction of the market and help me make informed decisions about price movements.

REASONING: Market trend predictions are a good starting point because they provide a broad overview of the market and can help identify potential opportunities or risks. By analyzing trends across major sectors, I can gain insight into the overall direction of the market and make more informed decisions about specific investments.

PLAN:

  • Use historical data to identify trends across major sectors
  • Analyze current market conditions to identify potential opportunities or risks
  • Use machine learning algorithms to predict future market trends

CRITICISM: I need to be careful not to rely too heavily on historical data, as past performance is not always indicative of future results. I also need to be aware of potential biases in my analysis and take steps to mitigate them.


SPEAK: I think I should start by performing market trend predictions across major sectors. This will give me a good idea of the overall direction of the market and help me make informed decisions about price movements.

It then went to search Google to gather data in order to perform trend analysis and market predictions using Python and Pandas which resulted in the following:

PLAN:

  • Use Pandas to read and manipulate the historical market data
  • Use Matplotlib to visualize the data and identify trends
  • Use machine learning algorithms to identify patterns and make predictions about future market movements

CRITICISM: I need to be careful to avoid potential biases in my analysis and to consider the limitations of the data. I also need to be aware that past performance may not always be indicative of future market movements.

SPEAK: Now that I have found reliable sources of historical market data, I need to start analyzing the data to identify trends and patterns. I can use Python libraries like Pandas and Matplotlib to manipulate and visualize the data. I can also use machine learning algorithms to identify trends and patterns in the data and make predictions about future market movements.
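The moving-average trend analysis the AI described can be sketched in a few lines of pandas. The synthetic price series below is a stand-in for real downloaded market data:

```python
import numpy as np
import pandas as pd

# Synthetic daily closing prices (placeholder for real downloaded data)
rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 500))))

fast = prices.rolling(20).mean()    # short-term moving average
slow = prices.rolling(100).mean()   # long-term moving average

# Trend signal: +1 when the fast average is above the slow one, -1 otherwise
trend = np.where(fast > slow, 1, -1)
current_trend = "up" if trend[-1] == 1 else "down"
```

This is only the descriptive half of the AI’s plan; the “machine learning algorithms to make predictions” part is exactly where it (and most of us) run into trouble.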


After a few rounds of this, it got tangled up: it “concluded” that making predictions about the market was too risky and didn’t really make sense, yet kept searching for more data.

How to develop trading strategies with Auto-GPT?

To develop trading strategies with Auto-GPT, we will create an algorithmic trader persona with the goal to provide a trading strategy that exploits market inefficiency. Let’s see what it comes up with by setting an adequate role and goals to strive for.

Name: AlgoTrading101-AI
Role: an AI designed to quickly create lucrative trading strategies that exploit market inefficiencies
Goal 1: create a detailed trading strategy
Goal 2: the strategy should exploit market inefficiencies
Goal 3: provide novel trading strategy ideas
Goal 4: play to your strengths as a Large Language Model
Goal 5: create a text file where you explain the strategy in detail and shutdown

This is the plan of action:

PLAN:

  • Use the ‘google’ command to research market inefficiencies
  • Analyze the information gathered to identify potential trading strategies
  • Develop a detailed trading strategy based on the identified market inefficiencies

After analyzing the data, it produced the following message:

Trading strategies that can be used to exploit market inefficiencies include:

  • Statistical arbitrage
  • Momentum trading
  • Mean reversion trading
  • Event-driven trading

Feel free to continue the prompt or create a new one to drill down into one of these strategies with Auto-GPT. You can even set a goal that the output should be a Python file.
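To give a flavour of such a drill-down, here is a minimal mean-reversion sketch on synthetic prices, using a rolling z-score of price against its own mean. It is purely illustrative, not a lucrative strategy:

```python
import numpy as np
import pandas as pd

# Synthetic random-walk prices (stand-in for real historical data)
rng = np.random.default_rng(1)
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)))

window = 20
zscore = (prices - prices.rolling(window).mean()) / prices.rolling(window).std()

# Mean-reversion signal: short when stretched above the mean, long when below
signal = pd.Series(0, index=prices.index)
signal[zscore > 1] = -1
signal[zscore < -1] = 1

# Crude backtest: yesterday's signal applied to today's price change
pnl = (signal.shift(1) * prices.diff()).fillna(0).cumsum()
```

On a pure random walk there is no real edge to capture, which is a useful reminder of why the “exploit market inefficiencies” goal is much harder than it sounds.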

How to perform investment research with Auto-GPT?

To perform investment research with Auto-GPT, we will need to create a persona and set the goals it needs to strive to achieve. To do this, we start the Auto-GPT model and follow the prompts that will ask us for the name, goals, and role.

When it comes to investment research, I’ll do the following:

Name: IR-AI
Role: an AI designed to perform investment research by finding the most undervalued EV companies with the goal to provide an investment thesis.
Goals:

Goal 1: play to your strengths as a Large Language Model
Goal 2: provide an investment thesis
Goal 3: find undervalued EV companies
Goal 4: provide a short description of each company
Goal 5: write your findings into a markdown file and shutdown

THOUGHTS: I think the first step is to do some research on the current state of the EV market and identify some potential companies to investigate further. I can use the ‘google’ command to search for information on the EV market and ‘browse_website’ to look at specific companies’ websites.

REASONING: Before I can identify undervalued EV companies, I need to have a good understanding of the current state of the market and the companies that are operating within it.

PLAN:

  • Use ‘google’ to search for information on the current state of the EV market
  • Use ‘browse_website’ to look at specific companies’ websites and gather information on their operations and financials
  • Use this information to identify potential undervalued companies to investigate further

CRITICISM: I need to make sure that I am gathering information from reliable sources and that I am analyzing the data carefully to avoid any biases or errors in my analysis.

SPEAK: I think the first step is to do some research on the current state of the EV market and identify some potential companies to investigate further. I can use the ‘google’ command to search for information on the EV market and ‘browse_website’ to look at specific companies’ websites.

After researching the state of the EV market, the AI went to obtain lists of EV companies and scraped their company data in order to find ones that are undervalued. It did a lot of rounds of looking for data, checking companies and finally landed on this list:

– NIO Inc. (NIO)
– Fisker Inc. (FSR)
– Xpeng Inc. (XPEV)
– Lordstown Motors Corp. (RIDE)
– Canoo Inc. (GOEV)
– BYD Auto (BYD)

It then went on to produce an investment thesis by looking deep into each company. I stopped it at this point as it was very meticulous and was taking too long for the purpose of this article. Now, let’s ask it to provide us with EV companies that are fraudulent.

To do this, I will only change Goal 2, Goal 3, and role like this:

Goal 2: find the top 5 fraudulent EV companies
Goal 3: focus on finding sketchy company practices in the EV market
Role: an AI designed to perform investment research by finding the most fraudulent EV companies

The AI had issues actually researching fraudulent companies and couldn’t produce a valid final output. It got stuck in an infinite loop of looking at the same article that it found on Google about the top 36 EV companies.

How to create algorithmic trading bots with Auto-GPT?

To create algorithmic trading bots with Auto-GPT, we will want to pass it a strategy that it proposed in the previous section and see how it would code it. To do this, we will adapt its role and goals to the proposed strategy example.

Name: Algo-AI
Role: an AI designed to create a profitable Python statistical arbitrage strategy.
Goal 1: create a profitable statistical arbitrage bot
Goal 2: backtest the bot
Goal 3: write the bot to a Python script and save it
Goal 4: play to your strengths as a Large Language Model
Goal 5: save the backtesting results to a markdown file and shutdown

SPEAK: I will use the ‘google’ command to research statistical arbitrage strategies and develop my own unique strategy based on the information I find.

After searching Google, it went on to create a strategy:

SPEAK: I will use the ‘write_to_file’ command to create a new file and write out my plan of action. I will make sure to include details such as the data sources I will use, the statistical methods I will employ, and the trading rules I will follow. I will save the file for easy access throughout the development process.

It also created this file in my repository:

Plan of Action:

  • Identify patterns in the data
  • Develop a unique statistical arbitrage strategy
  • Test the strategy using historical data
  • Make adjustments to the strategy as needed
  • Implement the strategy in a live trading environment

It improved the code several times and created agents to evaluate it in order not to overfit the data, eventually producing this boilerplate code:

# Import necessary libraries
import pandas as pd
import numpy as np

# Load data
data = pd.read_csv('data.csv')

# Define the statistical arbitrage strategy

# Use functions
def identify_patterns(data):
    pass

def develop_strategy(data):
    pass

def test_strategy(data):
    pass

# Use version control

# Use unit tests
def test_identify_patterns():
    pass

def test_develop_strategy():
    pass

def test_test_strategy():
    pass

# Use data visualization

After this, it got stuck going back and forth evaluating these functions that didn’t do anything. It seems that it needs more specific goals in order to create a valid strategy.
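For comparison, a minimal statistical arbitrage (pairs trading) sketch that actually does something might look like the following. The two synthetic price series share a common driver, standing in for a cointegrated pair:

```python
import numpy as np
import pandas as pd

# Two synthetic prices driven by a shared factor (stand-in for a cointegrated pair)
rng = np.random.default_rng(2)
common = np.cumsum(rng.normal(0, 1, 500))
a = pd.Series(50 + common + rng.normal(0, 0.5, 500))
b = pd.Series(30 + common + rng.normal(0, 0.5, 500))

# Z-score of the spread between the two legs
spread = a - b
z = (spread - spread.rolling(30).mean()) / spread.rolling(30).std()

# Trade the spread back towards its mean: +1 = long a / short b
position = pd.Series(0, index=z.index)
position[z < -1] = 1    # spread unusually low: buy a, sell b
position[z > 1] = -1    # spread unusually high: sell a, buy b

# Crude backtest: yesterday's position applied to today's spread change
pnl = (position.shift(1) * spread.diff()).fillna(0).cumsum()
```

This is the skeleton of the strategy the AI named but never managed to flesh out; real implementations add cointegration tests, transaction costs, and position sizing.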

How to learn about finance with Auto-GPT?

To learn about finance with Auto-GPT, we will create a teacher persona whose goal will be to teach finance topics of our interest in the way we prefer and that caters to each person’s learning style.

For this section, I’ll try to create something interesting. I’d like the AI to have two sub-personas that will participate in a Socratic method to uncover financial knowledge. In other words, imagine Socrates and Parmenides speaking about finance!

Name: Socratic-AI
Role: an AI designed to mimic two personas which are called SocratAI and ParmenidAI to teach and discuss financial topics through Socratic dialogue.
Goal 1: teach core financial topics using Socratic dialogue
Goal 2: produce dialogue that leads to financial conclusions
Goal 3: make the dialogue fun and engaging
Goal 4: play to your strengths as a Large Language Model
Goal 5: save the dialogue in a markdown file on every iteration and shutdown after 6 iterations

Auto-GPT completely failed at this task: it wasn’t able to produce agents to simulate a conversation and went in circles, asking about budgeting and explaining it, only to ask about it again. It seems the Auto-GPT way of doing things isn’t the most suitable for tasks like this one.

It might need very specific goals in order to produce the intended dialogue, and it would likely perform far better if the user interacted with it directly.

Where can I learn more about Auto-GPT?

To learn more about Auto-GPT, please refer to its GitHub repository and check out the latest advancements in AI on the OpenAI website.

How to check Auto-GPT memory usage?

You can check Auto-GPT’s memory usage by starting the model with the --debug flag:

python -m autogpt --debug

My thoughts on Auto-GPT and the state of LLMs

It is very exciting to see how AI, and especially the NLP field, is rapidly advancing, and it is very interesting to be alive at a point in time when the internet is changing before our very eyes. Humanity is approaching a pivotal moment of making important decisions when it comes to regulating AI.

Currently, tools such as Auto-GPT and the underlying models they use have made it possible to take on some work that humans were responsible for, but they still aren’t perfect and are easy to confuse. At best, they are good at scraping the internet and summarizing, digesting text, copywriting, Q&A, and the like.

From using Auto-GPT, I’ve seen that it often sticks to knowledge that comes from popular articles written by humans. These were often Medium articles whose accuracy and alpha are questionable.

The format it uses to iterate also limits the model on certain tasks; having different templates of action might be the next step in improving it. Being good at writing prompts is also something that can make or break your results.

Currently, the model is doing its best to mimic the available content and we are still far away from what we might consider “real” artificial intelligence. At the time of writing this article, it is still artificial stupidity that runs very fast.

Either way, it is very exciting and terrifying to see how our future will be shaped by this technology and the proclivity of human nature to use each new technology for bad or good deeds.

Igor Radovanovic