Every startup feels the crunch of having too much to do in too little time. Mark II Ventures is no exception, and we’re taking advantage of the latest developments in generative AI to do more in the time we have. Whether we’re working on internal products for studio use or helping one of our portfolio companies develop their own product, here are the ways we’re finding that LLMs like ChatGPT expedite building products.
Market Research
Understanding market dynamics, user needs, and the competitive landscape is critical to defining what product to build. Given the right context, ChatGPT is a great tool for brainstorming the risks and opportunities associated with a product.
At Mark II, initial research on a product results in a formal document we call a “Problem Space Exploration”. This document outlines the research done to validate or invalidate whether the problem to be solved is real and painful, and to understand and size the market in which that problem lives. This information is hugely valuable, and we include it as context in a ChatGPT prompt and ask questions like:
“You’re an intelligent investor with extensive experience in the SaaS and B2B space. You have been presented with a product with the following characteristics. What are the biggest objections you would have about investing in this idea?”
You can then take those objections and use a technique called “re-prompting”: feeding an LLM’s output back to it to elicit answers to the objections it just raised. When doing this, you’ll find that many of the objections and answers are a bit generic and could apply to many different products. The more context you give the LLM, the more specific it will be when helping you brainstorm. We generally iterate on the prompts and context several times to help us think through a problem.
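If you’re scripting this through the API rather than using the chat interface, re-prompting simply means appending the model’s reply to the running conversation before asking the follow-up. Here’s a minimal sketch using the OpenAI Python SDK as it existed around the time of writing (mid-2023); the model name, file name, and follow-up wording are placeholders for illustration.

```python
import openai  # OpenAI Python SDK, ~0.27.x (mid-2023 interface)

openai.api_key = "YOUR_API_KEY"  # placeholder

# Placeholder: the text of a Problem Space Exploration document.
problem_space = open("problem_space_exploration.md").read()

messages = [
    {"role": "system", "content": "You're an intelligent investor with extensive "
     "experience in the SaaS and B2B space."},
    {"role": "user", "content": "You have been presented with a product with the "
     f"following characteristics:\n\n{problem_space}\n\n"
     "What are the biggest objections you would have about investing in this idea?"},
]

first = openai.ChatCompletion.create(model="gpt-4", messages=messages)
objections = first["choices"][0]["message"]["content"]

# Re-prompting: feed the objections back and ask the model to respond to them.
messages.append({"role": "assistant", "content": objections})
messages.append({"role": "user", "content": "For each objection above, suggest how a "
                 "founding team could address or de-risk it."})

second = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(second["choices"][0]["message"]["content"])
```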
Keep in mind that ChatGPT is a great brainstorming partner but shouldn’t be blindly trusted in any scenario, so if you attempt to use it to truly “research” a given market, verify any and all numbers, links, and resources it generates.
Backlog Creation
To get the most out of ChatGPT, you have to be willing to give it enough information to understand what good outputs look like. An LLM is exceptionally good at structuring things in a consistent way, such as creating solid Agile user stories from a product scope. At Mark II, by the time we want to start creating a backlog for a product, we have detailed documentation describing the problem, the solution, and the ideal customer persona. We feed this information to ChatGPT and then, once again, use re-prompting to produce a starting point for a product backlog. The overall technique may look something like this:
“You are an expert in Agile Scrum and the product owner for a lean startup. You are working on a product {product name} within a market with the following characteristics: {insert product documentation}. Write all the user stories to develop and deliver a minimum viable product. These stories should be in markdown format in the style an agile professional would expect.”
ChatGPT will then give you a pretty good first pass at a list of user stories. We often take that list and “re-prompt” the LLM to get a more detailed, organized backlog. If you are using the API, you will have to re-send all the information from the first prompt, including all product documentation, to retain context. If you’re using the chat interface you can simply follow up with something like:
“Based on the user stories just described, re-write them including detailed acceptance criteria and list any potential technical challenges a software developer may encounter. Format the response in markdown.”
Formatting in markdown allows you to view the results in a digestible format, but if you want to do something programmatic with the results you’ll be better off telling ChatGPT to format the response in JSON.
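If you go the API route, the follow-up looks much like the market-research sketch above: append the model’s first answer to the conversation and ask again, this time requesting JSON. Here’s a rough sketch (same mid-2023 OpenAI SDK; the field names are our own choice, and the model won’t always honor them exactly, hence the defensive parsing).

```python
import json
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Placeholder: your product documentation, fed in as context.
product_docs = open("product_docs.md").read()

messages = [
    {"role": "system", "content": "You are an expert in Agile Scrum and the product "
     "owner for a lean startup."},
    {"role": "user", "content": "You are working on a product within a market with the "
     f"following characteristics:\n\n{product_docs}\n\n"
     "Write all the user stories needed to develop and deliver a minimum viable product. "
     "Respond with a JSON array of objects, each with the keys "
     '"title", "story", "acceptance_criteria", and "technical_challenges".'},
]

response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
raw = response["choices"][0]["message"]["content"]

# The model sometimes wraps JSON in prose or code fences, so parse defensively.
try:
    backlog = json.loads(raw)
except json.JSONDecodeError:
    raw = raw[raw.find("["): raw.rfind("]") + 1]  # crude extraction of the array
    backlog = json.loads(raw)

for story in backlog:
    print(f"- {story['title']}: {len(story['acceptance_criteria'])} acceptance criteria")
```

From there, the parsed items can be pushed into whatever backlog tool you use for refinement.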
At this point, you’ll have a great start to a backlog. It will definitely need to be refined, but this saves us around 80% of the normal time it takes to create an initial backlog for a product.
Building the Product
Now that we have defined the product and scoped an MVP, we can lean into the abilities of ChatGPT to help us start building the product. A few things ChatGPT can do:
- Bootstrap your application structure by generating a basic project. As of May 2023, when this article was written, this value is negligible, but expect it to improve quickly.
- Assist in defining your database schema. If you describe the entities and relationships you want in natural language, it can build a schema for your major database of choice (e.g., MySQL, PostgreSQL, MongoDB).
- Generate functions, methods, or classes based on a good description.
- Generate interactions with major libraries. For example, it is very good at generating boilerplate functions for working with AWS S3 (see the sketch after this list).
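To give a flavor of that last point, here is the kind of S3 boilerplate ChatGPT typically produces when asked for upload and download helpers. This is our own hand-written approximation using boto3, not verbatim model output, and the bucket and file names are placeholders.

```python
import boto3
from botocore.exceptions import ClientError

# Credentials are resolved from the environment or your AWS config, as usual for boto3.
s3 = boto3.client("s3")

def upload_file(local_path: str, bucket: str, key: str) -> bool:
    """Upload a local file to s3://bucket/key. Returns True on success."""
    try:
        s3.upload_file(local_path, bucket, key)
        return True
    except ClientError as err:
        print(f"Upload failed: {err}")
        return False

def download_file(bucket: str, key: str, local_path: str) -> bool:
    """Download s3://bucket/key to a local path. Returns True on success."""
    try:
        s3.download_file(bucket, key, local_path)
        return True
    except ClientError as err:
        print(f"Download failed: {err}")
        return False

# Example usage with placeholder names:
# upload_file("report.pdf", "my-product-bucket", "reports/report.pdf")
```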
While ChatGPT is great at assisting in the coding process in areas like this (and will continue to improve), I suggest using a generative AI tool such as GitHub Copilot while actively developing. Generative AI is swiftly improving at assisting in code generation (see Levels of Automation in Code Generation for more information), and even the fastest developers will get a boost from using these tools.
Prompt Engineering
If you haven’t done a deep dive into prompt engineering, there’s a great and thorough guide that walks through the most important aspects. Read it here: Prompt Engineering Guide
Conclusion
Large Language Models like ChatGPT have proven to be game-changers for our product development at Mark II Ventures, from brainstorming to backlog creation and actual coding. While their results must be verified and refined, their time-saving potential is vast. As generative AI continues to evolve, we’re excited by the future possibilities, recognizing that it’s not just about efficiency, but about redefining our approach to innovation. We remain committed to integrating AI into our workflows, eager to embrace the advancements on the horizon.