I recently sat down with Hunter Leath, the founder of Archil, to talk about what he’s building and why we were excited to lead Archil’s Series A at Standard Capital.

At a high level, Archil is the file system connecting AI applications to their data. The company started by turning S3 into an infinite POSIX-compatible file system, but the bigger idea is more general: agents and applications need a fast, reliable way to access data wherever it already lives, whether that is S3, Google Workspace, Box, Dropbox, or something else.

That may sound like a low-level infrastructure problem, and in some sense it is. But what makes it interesting right now is AI. A lot of people assume agents need some entirely new interface to data. Hunter’s insight is almost the opposite: LLMs are already extremely good at working with files. They know bash. They know ls, mv, cp, Git, markdown, folders, and all the familiar Unix primitives that have been in the training data forever. If you expose data to an agent as a normal file system, the model often becomes better at using it.
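A toy sketch makes the point concrete. If remote data is exposed as an ordinary directory (the paths here are stand-ins, not anything Archil-specific), an agent can lean on the same Unix tools the model has seen endlessly in training data:

```python
import os
import subprocess
import tempfile

# Stand in for a mounted remote dataset with a local temp directory.
# In the scenario described above, this path would be a file system
# backed by S3, Google Workspace, Box, etc.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "invoices"))
with open(os.path.join(root, "invoices", "2024-01.md"), "w") as f:
    f.write("# Invoice 2024-01\ntotal: $120\n")

# Because the data looks like files, an agent can use plain `ls` —
# no bespoke retrieval API, no new tool schema to learn.
listing = subprocess.run(
    ["ls", os.path.join(root, "invoices")],
    capture_output=True, text=True,
).stdout
print(listing)
```

The interface is the familiar one, so the model's existing fluency with ls, grep, cat, and friends carries over directly.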

This is one of those ideas that sounds obvious once you hear it, but is very hard to actually build.

Hunter is unusually well-suited to take it on. Before starting Archil, he spent almost a decade at AWS and Netflix working on cloud storage. At AWS, he worked on Elastic File System, one of the few serverless file systems on the market. At Netflix, he saw from the customer side how developers actually evaluate and use cloud storage. The pattern was clear: S3 is a great place for data to live, but many applications still want to interact with that data through a file system.

For AI workloads, that problem is getting more urgent. Agents need context. They need chat history, prompts, files, PDFs, images, CRM records, and sometimes hundreds of terabytes of data. You can’t download all of that to every server that might run an agent. You also don’t want to force every developer to manage Ceph clusters or build custom data movement infrastructure. Archil makes it possible to spin up thousands or even millions of file systems that connect directly to the data an application or agent needs.

The early customer pull has been very strong. Hunter talked about working with Clay, which is using Archil to help agents access large volumes of CRM-related data. He also described deployment and build use cases where Archil can speed up workflows by avoiding unnecessary data movement altogether. In one example, Archil was able to make parts of npm install effectively instant by parsing the lockfile and materializing dependencies on demand rather than copying data, improving build times by 30–50% in some cases.
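The npm example rests on a general trick: a lockfile already enumerates every artifact a build will need, so a file system can present the directory tree immediately and fetch contents only on first read. A minimal sketch of the enumeration step, using the package-lock.json layout npm v7+ writes (this illustrates the idea, not Archil's implementation):

```python
import json

# A miniature package-lock.json in the modern (lockfileVersion 3) format.
lockfile = json.loads("""
{
  "name": "demo",
  "lockfileVersion": 3,
  "packages": {
    "": {"dependencies": {"left-pad": "^1.3.0"}},
    "node_modules/left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz"
    }
  }
}
""")

# Every entry under node_modules/ maps a path to a resolved tarball URL.
# A lazy file system can expose these paths instantly and defer the
# actual download until a file inside them is read.
to_materialize = {
    path: meta["resolved"]
    for path, meta in lockfile["packages"].items()
    if path.startswith("node_modules/")
}
print(to_materialize)
```

Because the tree's shape is known up front, "install" becomes a metadata operation, which is where the reported 30–50% build-time wins would come from.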

We also talked about why this is the kind of company that takes real patience to build. File systems are not a toy problem. Performance, reliability, durability, compatibility, and correctness all matter. This is not a company that was going to pivot every few weeks in search of an easier idea. Hunter knew exactly what he wanted to build from the beginning, and the customer demand was there before the product was fully ready.

We believe AI is going to change the way infrastructure is designed. The last generation of cloud infrastructure was centered around stateless compute. The next generation will be much more stateful. Agents will need to bring context with them, attach to large remote data sets, and operate for hours or days at a time. In that world, state may become the fundamental unit of deployment, and file systems may become newly important.

That is why Archil is so exciting to us. File systems never went away. AI may make them more important than ever. We’re thrilled to partner with Hunter and the Archil team as they build the data layer for the next generation of AI applications.