On the Double Standards of AI in Social Media

Platforms, including LinkedIn, are now quick to shame anyone who even touches AI for creative or experimental purposes. But instead of using this moment to have real conversations about consent, authorship, and ethics, the outrage often feels performative, driven more by ego than principle.

Technological progress is inevitable. Can we focus on thoughtful ethical frameworks instead of knee-jerk moral panic?

Double standards are often rooted in familiarity, not ethics

    It’s acceptable to use AI-generated voiceovers, as long as they’re not explicitly labeled as such. But experimenting with AI reading passages from a book aloud? Suddenly, that’s a problem.

    One reason AI tools like translators or voice assistants get a pass is that people have gotten used to them. They don’t feel like “AI” anymore. But that doesn’t mean they’re ethically cleaner; it just means they’re normalized. This suggests people’s reactions aren’t grounded in consistent principles, but in comfort zones.

The “human effort” bias

    Online translators powered by deep learning are fine. Is that because they’re not marketed as AI, or because we’ve stopped seeing them as “AI”?

    There’s a bias where if a human struggled or spent time making something, even if it’s copying someone else, it’s seen as more valid than AI generating something instantly. But is the value of art in the time spent, or in the ideas it expresses?

The myth of “pure” creativity

    Creating mixed media artwork using Photoshop and incorporating other artists’ work is socially accepted, because it’s seen as “transformative.” But asking AI to generate something for personal use? That’s “unethical.”

    Human creativity has always involved remixing, referencing, and being inspired by others. AI does the same, just faster. The difference isn’t moral; it’s mechanical. If we reject AI for learning from existing works, we should be consistent and critique all derivative art, including fan art, homage, and style mimicry.

Ethics should be about consent, not tools

    Paying a human artist to mimic another’s style is legitimate. But asking AI to do the same is seen as a violation.

    The core issue should be consent and credit: who’s being acknowledged, who gave permission, and who benefits. It shouldn’t be whether a human or a machine did the creating. The conversation often skips this nuance in favor of outrage.

These four points aren’t ethical arguments; they’re comfort-based reactions. We accept what we’re used to and resist what feels new, even when the underlying principles are the same. Familiarity has replaced fairness in these discussions.

Technological progress isn’t going away. The question is whether we meet it with fear and moral panic, or with thoughtful ethical frameworks applied consistently.
