Ask HN: Does anyone else feel like a 'manager' now, with AI?
I've been an "IC" for ages. Now with agentic AI, I'm basically the orchestrator, approver, scheduler, and big-picture planner. I very rarely dive into the code in the weeds now, despite doing that full time for 10+ years (and having programmed as a hobby for two decades before that).
I can get so much more done. I can take on things I wouldn't have attempted before, because my natural limitations would have made them take far longer. It's changed the entire way I exist. I have more time to think about big-picture things, not just in work but in life. I feel more like myself, because I get to be in touch with how I actually feel more of the time, rather than having my head in that pure creative flow space. I still get into that flow, but it's for the planning and orchestration, rarely the code, or it's for something else unrelated to work.
I love the AI revolution. The biggest thing it's given me is time. By my own estimation, I can move 100 to 200 times faster with current SOTA agentic tools. I feel like I'm "managing" a bunch of high-performing, focused, energetic ICs. It can turn regular people into their own little labs. It is so cool.
Anyone else feel this way, or can relate, or want to share their own positive experiences?
The current state of these tools is like calculators: they're not going to help you be correct, accurate, or precise unless you already know the bulk of what you're doing. The mental model is still your own; it's just extended by access to more information. If your knowledge is mostly rote, you're more likely to be wrong.
I guess the way I think of it is: you know that saying, "You should employ people smarter than you"? That's what AI is to me. You want people around you who are more capable than you; you can learn from them and succeed with them. That's how I think of it.
In reality, managers and upper-level peers are threatened by people they perceive as smarter than them. Your resume wouldn't even get past screening if you came across as "better" than the current team.
More like an engineer for me. The work is reading the plans, adapting the plans to the code, approving the code, making sure it's up to standard, knowing which standards it should be held to, and so on.
I spend a lot more of my day reading up on foundational stuff rather than typing. It's a bit like electrical engineers not being expected to solder as much stuff by hand anymore. The machine does it cleanly. Doesn't mean the work is below us, but we get to focus on the things that count.
The people who hate AI seem to have plenty of time on their hands to respond to posts like this, so be ready. Expect, “If you’re 100 to 200 times faster and are managing a bunch of energetic ICs, what have you produced?”
You’ll never satisfy them with your answers because there is no world in which they are complimentary of AI tools.
IMO (as someone who strongly agrees with you and loves the AI revolution) keep this kind of thing to yourself. I just don’t see the upside of sharing given how overwhelmingly negative the prevailing sentiment is.
I want to like it, but I can’t relate to people who seem to be able to direct the AI to do all this stuff for them. With what I do, the AI is barely functional. I’m always curious what people are working on when AI is able to effortlessly do it all for them.
There could be some rose-tinted glasses helping those folks, but I'm also curious: what do you do, and which models/tools have you tried?
I’m only allowed to use Copilot at work. Within that, I’ve tried the various GPTs, Claude, and Gemini. So far, Gemini 2.5 Pro has worked best, but only on fairly limited scopes.
I’m mostly using Ansible currently, which Copilot doesn’t seem great at. On top of that, there is a lot of OpenStack and other enterprise software where there isn’t much public code for it to train on. And then on top of that, I’m having to add a significant amount of business logic and infrastructure-specific stuff, which won’t exist elsewhere for the LLM to learn from. If someone has done it before, it is likely in another enterprise and not public code.
Copilot has mostly been useful for writing some JSON queries or regexes; that’s about it.
I know Ansible Lightspeed exists. I’m not sure how well that would do, but I don’t think I’m allowed to use that currently. We have a lot of governance around the use of AI.
That's just silly. People are allowed to make grandiose claims and no one is allowed to question their statements? Anyone who is curious about these statements just "hates AI"?
So far seems good.
No, more like the opposite.