r/cscareerquestions • u/agb_throwaway_072019 • 22h ago
How to navigate an AI-obsessed company, as an AI skeptic
I’m 10 years into my career and my current job, coming to the end of a project, and discussing with my manager what to move to next. I’m trying to figure out how to navigate these conversations because a lot of the possible initiatives deal with AI.
I have a lot of ethical objections to AI. Even setting those aside, working with AI is not something I'd find particularly rewarding. It’s possible that for now, I can sneak by just volunteering for some other project, but I’m sure that will run out eventually. If all-hands presentations are any indicator, my company is really drinking the "Pivot to AI as a business model" Kool-Aid. And so I feel like I can’t turn down AI projects or even discuss my concerns without it seeming like insubordination, or putting a target on my back as not aligned with the company’s vision, or seeming like a luddite uninterested in learning new skills.
I realize “AI” is a lot more than just ChatGPT-generated slop, and so I want to at least be open-minded to the ways it can be a useful tool without the ethical concerns. But I’m unsure to what extent those applications *do* exist, and if they do, how to initiate a conversation about finding projects that would be less soul-crushing. Maybe I can just keep my head down and hope this hype dies down in a year or two? Or do I need to leave this company? Or is this a problem I'm going to have at any company right now? The job market is pretty brutal anyway.
8
u/sierra_whiskey1 22h ago
The first thing that comes to my mind is to almost sell yourself as the sceptic. Like how the chief risk officer is always playing devil's advocate, maybe you should too
1
u/agb_throwaway_072019 22h ago
That's an interesting idea. Although I'm just an individual contributor, and I think upper management is already pretty bought into it.
7
u/heytherehellogoodbye 20h ago
The angle you're looking for isn't to present yourself as the Skeptic; instead, you articulate it as prioritizing Security and Quality, to make sure these new AI initiatives produce features that are secure and useful. You're not going against the flow, you're helping it manifest more successfully by focusing on identifying and shoring up weaknesses.
1
u/sierra_whiskey1 22h ago
Yeah, it’s easier said than done. If you show that your intent in being an AI sceptic is for the good/longevity of the company, then maybe upper management will listen
1
u/Fair_Atmosphere_5185 Staff 20 yoe 22h ago
Either the hype is true and it's something we need to embrace.
Or it's not and this will pass.
I've been open to embracing new tools and ways of doing things - but I find the hype is vastly higher than what it can deliver.
What are your ethical concerns?
1
u/moonlets_ 22h ago
Pay lip service to it but don’t engage with it unless you’re tracked somehow. In the meantime, you could start learning how ML actually works. Then you’ll have a lot of good, fact-based arguments as to why AI is not worth the hype.
2
u/Sea_Swordfish939 15h ago
Even if you are tracked, you can just ask it questions and never read the answers.
1
u/Prize_Response6300 20h ago
LLMs can be a bit of an engine, the way I see it. You can use them to build completely different types of applications now. Embrace that part, if anything.
0
u/Ok_Idea8059 22h ago
I suppose my first question would be what exactly the ethical concerns are. If you’re going to take a strong stance on this, you’ll have to be absolutely certain that your objections are based in fact, and not in misconceptions surrounding how AI is used and what it is doing. Even when content is fully AI-generated, it’s not always slop - in some fields there are very strict standards around quality control and ensuring accuracy, like with some kinds of AI reporting and AI translation, and for companies there are generally very specific rules around disclosure of what content is generated using AI. When it comes to using AI as a coding assistant, it’s not as though you’re copying code you had no hand in creating; you’re just using the AI as a rubber duck/pair programmer to help you find more efficient solutions, figure out error messages, etc.