# Configuration
All options for PgFileSystem, including limits, sandbox controls, and vector search.
## PgFileSystemOptions
```ts
const fs = new PgFileSystem({
  db: sql,                         // SqlClient (required)
  workspaceId: "workspace-1",      // string (default: random UUID)
  version: "main",                 // active version label (default: "main")

  // Limits
  maxFileSize: 10 * 1024 * 1024,   // max file size in bytes (default: 10 MB)
  maxReadSize: 5 * 1024 * 1024,    // max read size (default: unlimited)
  maxFiles: 10_000,                // max files per workspace (default: 10,000)
  maxDepth: 50,                    // max path depth (default: 50)
  maxSymlinkDepth: 16,             // max symlink indirection (default: 16)
  maxCpNodes: 10_000,              // max nodes per cp -r (default: 10,000)
  statementTimeoutMs: 5000,        // query timeout in ms (default: 5000)

  // Sandbox
  rootDir: "/",                    // root directory (default: "/")
  permissions: {
    read: true,                    // allow read operations (default: true)
    write: true,                   // allow write operations (default: true)
  },

  // Vector search
  embed: undefined,                // embedding function (default: undefined)
  embeddingDimensions: undefined,  // expected dimensions (default: undefined)
})
```

## Full Reference
| Option | Type | Default | Description |
|---|---|---|---|
| db | SqlClient | - | Database client (required) |
| workspaceId | string | UUID | Workspace identifier for multi-tenant isolation |
| version | string | "main" | Version label this instance reads from and writes to inside its active version root. See Versioning. |
| maxFileSize | number | 10 MB | Maximum file size in bytes for writeFile/appendFile |
| maxReadSize | number | - | Maximum bytes returned by readFile (throws E2BIG if exceeded) |
| maxFiles | number | 10,000 | Maximum files per workspace |
| maxDepth | number | 50 | Maximum directory nesting depth |
| maxSymlinkDepth | number | 16 | Maximum levels of symlink indirection before ELOOP |
| maxCpNodes | number | 10,000 | Maximum nodes a single recursive cp may traverse |
| statementTimeoutMs | number | 5000 | PostgreSQL query timeout (SET LOCAL statement_timeout) |
| rootDir | string | "/" | Root directory; all operations are sandboxed within it |
| permissions | FsPermissions | { read: true, write: true } | Enable or disable read/write operations |
| embed | (text: string) => Promise<number[]> | - | Embedding function for semantic search |
| embeddingDimensions | number | - | Expected vector dimensions (validated on write) |
## Limits
Limits protect against runaway usage. When a limit is exceeded, the operation throws a descriptive error.
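These errors surface with errno-style codes (E2BIG for oversized reads, ELOOP for symlink cycles, EACCES for sandbox violations, per the reference table). A minimal sketch of routing them, assuming thrown errors carry a Node-style `code` property; the helper itself is illustrative and not part of the library:

```ts
// Illustrative helper (not part of PgFileSystem): map the errno-style
// codes from the reference table to user-facing messages. Assumes the
// thrown errors expose a Node-style `code` property.
function describeLimitError(code: string): string {
  switch (code) {
    case "E2BIG":
      return "read exceeds maxReadSize"
    case "ELOOP":
      return "symlink chain exceeds maxSymlinkDepth"
    case "EACCES":
      return "path escapes rootDir or the required permission is disabled"
    default:
      return `unexpected error code: ${code}`
  }
}
```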
```ts
// Restrict to small files and fewer total files
const fs = new PgFileSystem({
  db: sql,
  workspaceId: "sandbox",
  maxFileSize: 1024 * 1024,  // 1 MB max per file
  maxReadSize: 512 * 1024,   // 512 KB max read
  maxFiles: 1000,            // 1,000 files per workspace
  maxDepth: 20,              // 20 levels deep
  maxSymlinkDepth: 8,        // 8 levels of symlink indirection
  maxCpNodes: 1000,          // 1,000 nodes per recursive cp
  statementTimeoutMs: 2000,  // 2 s query timeout
})
```

## Sandbox
Use rootDir and permissions to sandbox operations. Paths that resolve outside rootDir throw EACCES, as does any operation whose permission is disabled.
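The containment rule behind rootDir can be modeled as: resolve the path, then require it to stay at or below the sandbox root. This is an illustrative sketch, not the library's actual implementation:

```ts
import path from "node:path"

// Illustrative model (assumed, not PgFileSystem's actual code) of the
// rootDir containment check. Resolving normalizes ".." segments, so
// "/data/../etc" becomes "/etc" and is rejected for rootDir "/data".
function isInsideRoot(rootDir: string, target: string): boolean {
  const root = path.posix.resolve("/", rootDir)
  const resolved = path.posix.resolve("/", target)
  const prefix = root.endsWith("/") ? root : root + "/"
  return resolved === root || resolved.startsWith(prefix)
}
```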
```ts
// Read-only filesystem scoped to /data
const fs = new PgFileSystem({
  db: sql,
  workspaceId: "reader",
  rootDir: "/data",
  permissions: { read: true, write: false },
})

await fs.readFile("/data/config.json")    // OK
await fs.writeFile("/data/new.txt", "hi") // throws EACCES (write disabled)
await fs.readFile("/etc/secrets")         // throws EACCES (outside rootDir)
```

## Vector Search
To enable semantic and hybrid search, provide an embed function. Embeddings are automatically computed on writeFile and appendFile for text content.
```ts
const fs = new PgFileSystem({
  db: sql,
  workspaceId: "workspace-1",
  embed: async (text) => {
    const res = await openai.embeddings.create({
      model: "text-embedding-3-small",
      input: text,
    })
    return res.data[0].embedding
  },
  embeddingDimensions: 1536,
})
```
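For tests where calling a hosted embedding API is impractical, a deterministic stand-in embed function keeps the embeddingDimensions contract without network access. This helper is purely illustrative and not part of the library:

```ts
// Deterministic, network-free stand-in for `embed` (illustrative only).
// It hashes character codes into a fixed-size vector and L2-normalizes,
// so the same text always yields the same unit-length vector.
function fakeEmbed(dimensions: number): (text: string) => Promise<number[]> {
  return async (text) => {
    const vec = new Array<number>(dimensions).fill(0)
    for (let i = 0; i < text.length; i++) {
      vec[i % dimensions] += text.charCodeAt(i)
    }
    const norm = Math.hypot(...vec) || 1 // avoid division by zero for ""
    return vec.map((v) => v / norm)
  }
}
```

Pass it as `embed: fakeEmbed(1536)` alongside `embeddingDimensions: 1536` so dimension validation on write behaves the same as with a real model.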