Absolutely not, and if you do this then please please rotate keys every day or two.
The only private keys my agents have access to are temporary AWS access keys to a dev environment with decently locked-down permissions.
I let it troubleshoot my web code with a temporary JWT in a dev environment, via headless Chrome and Puppeteer in a Docker container.
Everything else is in AWS Secrets Manager, inaccessible to the IAM role the agent uses.
I don’t store the temporary AWS keys in a file anywhere. They are in environment variables. All AWS SDKs and the CLI look in the environment variables by default.
I sure as hell don’t store API keys anywhere on my local computer.
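To illustrate the environment-variable point: the AWS SDKs' default credential chain checks `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` before any on-disk config, so temporary keys exported in the shell never touch a file. A toy sketch of that first lookup step (not the real boto3 internals):

```python
import os

def resolve_credentials(env=os.environ):
    """Mimic the first step of the AWS SDK default credential chain:
    environment variables take precedence over any on-disk config."""
    key_id = env.get("AWS_ACCESS_KEY_ID")
    secret = env.get("AWS_SECRET_ACCESS_KEY")
    token = env.get("AWS_SESSION_TOKEN")  # present for temporary STS keys
    if key_id and secret:
        return {"access_key_id": key_id, "secret_access_key": secret,
                "session_token": token}
    return None  # real SDK would fall through to config files, IMDS, etc.

# Temporary keys live only in the process environment, never in a file:
creds = resolve_credentials({
    "AWS_ACCESS_KEY_ID": "ASIA-EXAMPLE",
    "AWS_SECRET_ACCESS_KEY": "example-secret",
    "AWS_SESSION_TOKEN": "example-token",
})
```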
Is there something you don't like about using AWS Secrets Manager, or that you think should be handled differently?
I'm researching building an execution environment that handles the secrets plus the actual execution; any input is appreciated.
Well, since in my case all of the LLMs I use are hosted on AWS Bedrock, I can get away with only caring about AWS access keys.
If I need to store database passwords in Secrets Manager, I can just pass the ARN of the secret in the connection string. I often don't even need to do that and prefer to use the Data API to access Aurora Postgres/MySQL, which also uses the IAM permissions.
Even for access to EC2 instances I use an IAM-controlled Session Manager proxy to access them over SSH/RDP.
But Secrets Manager just works. It's a simple API/CLI command and the permission system to access it is very granular.
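To show how simple the call is: fetching a secret is one `GetSecretValue` request, and IAM policies scoped to the secret's ARN decide which role may read it. A sketch in boto3 style, with a fake client standing in for `boto3.client("secretsmanager")` so it runs without AWS access (the ARN and secret layout are invented for illustration):

```python
import json

def fetch_db_password(client, secret_arn):
    """A single Secrets Manager call; IAM policies scoped to the secret's
    ARN decide which role may read it."""
    resp = client.get_secret_value(SecretId=secret_arn)
    return json.loads(resp["SecretString"])["password"]

class FakeSecretsClient:
    """Stand-in for boto3.client("secretsmanager"), so the sketch runs
    without AWS credentials."""
    def get_secret_value(self, SecretId):
        return {"SecretString": json.dumps({"password": "s3cret"})}

pw = fetch_db_password(
    FakeSecretsClient(),
    "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-example")
```

In real use you would pass `boto3.client("secretsmanager")` as the client, and the agent's IAM role simply would not have `secretsmanager:GetSecretValue` on the ARNs it shouldn't see.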
Why not treat them like other users? Give them some sort of indirect access like Antiphony. Give them their own keys that you can rotate and revoke. If you're worried about leaks, you might as well run it "self-hosted" like on Bedrock.
I share with Gemini, Claude and OpenAI.
If I get my stuff hacked (because I use a machine with nothing else on it other than coding agents) I'll know these services are not removing my personal info from their logs.
I don't operate Chinese models where my high-value API keys are.
It's pretty hard to debug stuff without using real API keys, service accounts, etc. otherwise
So you prefer using separate VMs?
I wanted to ask almost this question, then saw that it is on #1 right now.
My use case is SSH. I would like to stick my private key into a local Docker container, have an ssh-identical CLI that reverse-proxies into the container, and have some rules about which ssh commands the container may proxy or not.
Does anyone know of something like this?
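The "rules about which ssh commands the container may proxy" part could be as simple as an allowlist check before forwarding to the real ssh binary. A toy sketch, assuming a prefix-style policy on the remote command (the allowlisted commands and the proxying itself are invented for illustration):

```python
import shlex

# Hypothetical policy: only these remote executables may be forwarded.
ALLOWED_COMMANDS = {"ls", "cat", "squeue", "sbatch"}

def may_proxy(ssh_command: str) -> bool:
    """Return True if the remote command's executable is on the allowlist.
    Unparseable or empty input is rejected outright."""
    try:
        argv = shlex.split(ssh_command)
    except ValueError:
        return False
    if not argv:
        return False
    return argv[0] in ALLOWED_COMMANDS
```

The container would run this check, then exec the real `ssh` with the key it holds; the agent-facing CLI never sees the private key at all.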
If your use case is just about dealing with private keys and transaction signing, why not use a KMS service?
No, more like letting an agent interact safely with an HPC frontend. No cloud, no Windows
As a precaution I would probably never pass secrets directly to the agent at all. Something like a placeholder format where the actual substitution happens at execution time so the LLM never sees the real value. Keeps things cleaner if something ever goes wrong.
Is there any tool that can do this?
I use mitmproxy outside of the agent VM
Interesting, how do you use mitmproxy for calling the OpenAI LLM? Or what exactly do you use it for?
Like everything else. You don't share your private personal data or credit card numbers with the rest of the world, just like that. ;)
I am okay; I trust that they have great guards in place to prevent leaking any API keys.
Which agent framework or tool gives a guarantee against leaks?
Nope, too dangerous - I'm personally working on an agent project and I know from personal experience that they do collect your session logs - especially in China lol. One approach I use for my own agent is to use keyring to store all secrets. The agent calls a tool to request one, and what it gets is something like <secret:gmail.password>. The substitution happens at tool execution time, so the LLM never sees or logs the actual value.
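That placeholder scheme can be sketched like this: placeholders of the form `<secret:name>` survive intact through the LLM and are only expanded at tool-execution time. This is a toy version; in real use the `lookup` callable would wrap something like `keyring.get_password`, while a plain dict stands in here:

```python
import re

# Matches placeholders like <secret:gmail.password>
PLACEHOLDER = re.compile(r"<secret:([A-Za-z0-9_.-]+)>")

def substitute_secrets(text, lookup):
    """Replace <secret:name> placeholders at tool-execution time, so the
    LLM only ever sees the placeholder and never the real value."""
    return PLACEHOLDER.sub(lambda m: lookup(m.group(1)), text)

# Dict stands in for a keyring backend in this sketch.
secrets = {"gmail.password": "hunter2"}
cmd = substitute_secrets("login --password <secret:gmail.password>",
                         secrets.__getitem__)
```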
Keyring is one solution, but even substituting values at execution does not guarantee security, as agents can read the process itself.
I'm building a safe agent execution layer: a runtime where agents can act but cannot access secrets. Kind of a sidecar that is callable by the agent for using API keys, secrets, private keys, etc., plus one can add policies on how and what an agent can do.
Does this seem good?
Yah, keyring is more for static protection. When the agent process itself is hostile, keyring is kinda obsolete.
But then I think the key point is that sometimes the agent does need access to credentials to be useful - like I will give some credentials to the agent, such as my browser account access.
Personally I feel it is not really about preventing the agent from accessing credentials, but more about having a supervision layer when the agent accesses them - like you know exactly when and why the agent needs access, and you have the ability to deny or approve it.
So do we need something like a `safe agent execution layer - that is policy enforced` (SEAL)? We could manage what should be allowed and what not.
The agent uses the LLM to plan the action, but the actual execution happens in SEAL.
Any example where it would make sense to start with?
Open for thoughts
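One way to make the SEAL idea concrete, as a rough sketch (the class, policy shape, and targets are all invented here, not an existing tool): the agent submits planned actions, the runtime default-denies anything not in the policy, and credentials are injected only inside the runtime for approved actions - the agent never holds them.

```python
# Hypothetical policy: (action, target) pairs and their decisions.
POLICY = {
    ("http_get", "api.github.com"): "allow",
    ("http_post", "prod-billing.internal"): "deny",
}

class SealRuntime:
    """Toy policy-enforced execution layer: the agent plans actions,
    only this runtime ever touches the credentials."""

    def __init__(self, policy, secrets):
        self._policy = policy
        self._secrets = secrets  # never handed to the agent directly

    def execute(self, action, target):
        # Anything not explicitly allowed is denied (default-deny).
        decision = self._policy.get((action, target), "deny")
        if decision != "allow":
            return {"status": "denied", "action": action, "target": target}
        # Credential is injected here, at execution time only.
        token = self._secrets["api_token"]
        return {"status": "ok", "used_credential": bool(token)}

seal = SealRuntime(POLICY, {"api_token": "tok_example"})
ok = seal.execute("http_get", "api.github.com")
blocked = seal.execute("http_post", "prod-billing.internal")
unknown = seal.execute("shell", "localhost")
```

An approval hook for the supervision idea above would slot into `execute` right where the policy is consulted: instead of a static dict, ask a human to allow or deny.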
No :)
Does your agent only print "Hello World" on the console? Or does it use any service? ;)