A Dropbox Folder, Some Markdown Files, and an Agent That Does My Job Better Than Me
The selection team at Endeavor picks roughly 100 companies a year. We are supported in this by more than 1,000 mentors who meet with these entrepreneurs to deliver value to them while helping us better understand the companies. We work with our worldwide offices to find entrepreneurs who meet our selection criteria, and the rest comes down to matchmaking — finding the right mentor for each entrepreneur, then handling the logistics to make that meeting happen. At the beginning of this year, we set ourselves a goal: figure out how AI could help us do this work better.
A few months in, we’ve built a system that can do the following:
It recommends mentors for any entrepreneur based on their company and the nature of their challenge. This used to take me about an hour, sifting through a large pool of mentors, and depending on how busy I was, it could take a day or two before I found that hour. The system does it instantly. It regularly surfaces mentors I would never have thought of.
It finds comparable companies in our network at a deeper level than geography and sector. In at least one case, it surfaced a comparison I would never have made.
It sends me a weekly status update on every company I’m working with. It frequently catches things that had fallen through the cracks.
It scours the internet for companies that fit our selection criteria, tracks them in a spreadsheet, and flags news about them.
It drafts email responses directly in Gmail for the more routine correspondence we handle.
It answers questions from my team on Slack. I gave it my personality. My colleagues swear they like it more than me.
We built this with Claude Cowork, Dropbox, and a bunch of markdown files. No engineering team. No custom software.
The framework that guided us: every AI agent needs a brain, a face, and hands.
The Brain
At the center of every AI system is the brain. The brain is what the system knows and how it thinks. In our system, Claude handles the thinking, while Dropbox holds the memory.
We started by uploading the relevant files into a Dropbox folder. Then we gave the system an onboarding, the same way you would a new colleague. Imagine hiring the smartest person in the world, handing her all your organization’s files, and immediately asking her questions. She would probably struggle to answer, because she lacks context. For the system to provide reliable answers, it needed to understand our organization as well as any team member does. So we wrote documents explaining our organization, our team, and our core workflows.
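For illustration, the folder ended up organized along these lines. Every name below is invented for the example; the point is that the top-level README tells the system what everything is and how it relates:

```
agent-workspace/
├── README.md            # what this folder is and how the files relate
├── org/
│   ├── about-us.md      # what the organization does
│   └── workflows.md     # core workflows, step by step
├── people/
│   └── jane-doe.md      # memory file for one mentor
└── companies/
    └── acme-co.md       # memory file for one company
```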
Claude drafted most of these, then we reviewed and refined them. We wrote everything in markdown because it’s easy for AI to read, parse, and edit.
We also created a markdown file for each person and each company our team works with. These files are the system’s memory, where it stores everything it knows or learns about each entity. Our approach was inspired by Andrej Karpathy’s LLM wiki concept. Memory is essential for systems to be capable of completing long-running tasks. For the system to respond to emails accurately, it needs to know not just who you work with but the history of your engagement with them. When it sends me a weekly update on a company, it pulls information from that company’s file to identify the founders, what the company does, and where we are in the selection process.
The information an agent needs to do complex work often can’t fit inside a single conversation. It needs somewhere external to store what it knows and retrieve the relevant pieces when needed.
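As a concrete sketch, a per-company memory file might look something like the following. The company, names, dates, and section headings here are all invented; the real files follow whatever structure the workflow needs:

```markdown
# Acme Robotics — company memory

## Basics
- Founders: Jane Doe (CEO), John Roe (CTO)
- What they do: warehouse automation robots
- Stage in selection: second-opinion review scheduled

## Engagement history
- 2025-03-04: intro call; discussed go-to-market challenge
- 2025-03-18: matched with a mentor on pricing strategy

## Open items
- Confirm availability for the April selection panel
```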
The Face
The face is how you talk to the system and how it talks back. We primarily use Claude Cowork. In our framework, the face is separate from the brain because there are many possible interfaces. We also interface with our system through a Slackbot. OpenClaw famously uses messaging apps as the primary interface. You could build the same brain and talk to it through WhatsApp, a web app, or email. The thinking and the memory sit underneath, independent of how you choose to interact.
The Hands
Hands turn a knowledge base into an agent by giving it the ability to actually do things.
The hands in our system come in two forms: connections and skills.
Connections link the system to external tools like Gmail, Google Calendar, Slack, and Fireflies (for meeting transcripts). They let the system pull information from those platforms into its memory, but also act on them: drafting emails, answering Slack messages, checking calendars. The connections are managed through MCP (Model Context Protocol), an open standard that most AI platforms now support. We manage ours through Claude Connectors and Codex Apps.
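For readers who want to see what a connection looks like under the hood: MCP servers are typically declared in a small configuration file. The entry below is an illustrative sketch only — the server package name is invented, and each platform documents its own config file and location:

```json
{
  "mcpServers": {
    "gmail": {
      "command": "npx",
      "args": ["-y", "some-gmail-mcp-server"]
    }
  }
}
```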
Skills are how we teach the system to perform repeated tasks. A skill is a set of structured instructions an LLM can follow repeatedly, defined in an open-source format that works across most agentic platforms. Skills are extremely flexible: they can be built on top of each other and can leverage any of the data or connections you’ve built into your system. There are numerous ways to trigger a skill automatically, such as scheduled tasks and routines in Claude Cowork.
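In practice, a skill is usually a folder containing a SKILL.md file: YAML frontmatter telling the agent when to use the skill, followed by step-by-step instructions. Here is a hedged sketch of what a weekly-update skill could look like — the name, description, and steps are invented for the example, not copied from our system:

```markdown
---
name: weekly-company-update
description: Compile a weekly status update for every company the user is working with.
---

1. List the company memory files in companies/.
2. For each company, read the memory file and note the founders,
   the stage in the selection process, and any open items.
3. Check recent meeting transcripts and email threads for updates.
4. Draft one short summary per company and flag anything overdue.
```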
What We Learned Building This
Gall’s Law.
Gall’s Law states that all complex systems that work evolved from simpler systems that worked. If you want to build a complex system that works, start with a simpler system and improve it over time. We started by putting files in a Dropbox folder and seeing what Claude could do with them. Then we added contextual documents to help it understand the folder. Then we made it platform-agnostic so OpenAI’s Codex could work on it too. Then we built skills. Then we connected external tools. Then we added memory. We took each step one at a time and tested it, sometimes for hours, sometimes for weeks, before moving to the next.
Less is more.
Our first real unlock while building the system was realizing that instead of giving it as much information as possible, we should give it as little as necessary. Claude was trained on an enormous amount of data. It already knows how to write a business memo. You don’t need to teach it by uploading all your past writing. You can just give it a template and a set of guidelines.
We found that the more information we loaded in, the more likely the system was to get confused by out-of-context or contradictory information. But stripping information out wasn’t enough on its own. Everything we put into the system needed context: what is this document, how does it relate to other documents in the knowledge base, how does it connect to the people and companies the system knows about? So whenever we upload a new file, Claude writes a short note explaining what the document is and where it fits in the system. You wouldn’t hand a new colleague a document without telling them what it’s for.
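A small script can keep this habit honest by flagging documents that were dropped into the folder without a companion note. This is a sketch under one assumption — that each context note lives next to its file as `<name>.context.md` — which is a convention invented for the example, not the system’s actual layout:

```python
from pathlib import Path


def files_missing_context(folder: str) -> list[str]:
    """Return documents in `folder` that lack a companion context note.

    Assumes the (hypothetical) convention that a note for 'memo.md'
    is stored alongside it as 'memo.md.context.md'.
    """
    root = Path(folder)
    missing = []
    for doc in sorted(root.iterdir()):
        # Skip subfolders and the context notes themselves.
        if not doc.is_file() or doc.name.endswith(".context.md"):
            continue
        if not (root / f"{doc.name}.context.md").exists():
            missing.append(doc.name)
    return missing
```

Run it over the knowledge base before a working session, and anything it returns is a file the system is seeing without being told what it’s for.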
Memory is magic.
A system that can’t remember what happened yesterday can’t do complex work. The agent that sends me a weekly update needs to know each company, each founder, and each stage of the selection journey. It needs to accumulate knowledge over time, not start from zero every conversation. Building the memory layer, those markdown files for each person and company, updated as new information comes in, is what transformed our system from a chatbot into something that does work.
Feet on the ground, head in the clouds.
A founder I work with once told me that every day, he wakes up in a new world. The landscape is changing faster than ever. New tools and capabilities are being unlocked every day. We’re building for what exists today, while understanding that things will change every day. My colleague spent a week testing workarounds for connecting our system to Google Sheets. Five minutes after we got off a Zoom call about it, Claude released an update to their Google Drive connector that made every workaround obsolete.
We try to stay ahead of this by keeping things simple. We build as much as we can within Claude Cowork and Codex, knowing those tools will improve over time. We avoid complex custom integrations where we can, because if we switch platforms, we’d have to rebuild them. We use markdown files because they work with almost any AI platform. We use skills for the same reason. The simpler the system, the less you have to rebuild when things change. And things change constantly.
Where to Start
A few months ago, I mostly used LLMs as a souped-up version of Google search. Today, I have multiple agentic workflows that actually save me time and enable me to do things I previously couldn’t. If you’re where I was then, here’s where I would suggest you start:
Pick one workflow that takes you too long. Something you do repeatedly, something that involves searching or synthesizing information. Don’t try to automate your whole job. Just pick one thing.
Get your information into a folder. Write a short document explaining what’s in it and how it all connects. Start a conversation with Claude and see what it can do with what you’ve given it.
Then add one thing. A connection to your email. A skill that runs on a schedule. A memory file for a key client. Test it. Live with it for a few days. Then add another.
The system we have now, the one that matches mentors and tracks companies and drafts emails and answers my team’s questions better than I do, started as a Dropbox folder and a conversation. Everything else grew from there.
