I feel like just about everyone has experienced, or knows someone (through friends, family, or community) who has experienced, some form of life-changing harassment, assault, blackmail, scamming, or other malicious activity carried out over social media or online platforms, and the perpetrators are rarely, if ever, caught or punished. Thanks to the AI boom, these problems look set to get significantly worse over the next decade or more: bad actors can now easily scale their efforts and create more convincing lies, threats, and setups using AI tools, accomplishing things they lacked the skills to do before.
I have little doubt that people looking back on the first few decades of widespread internet adoption without guardrails will see it as a huge moral failure on society's part. We've built a system that lets predators, scammers, and the like all over the world cause untold harm to victims they would otherwise never have encountered, and get away with criminal behavior that, lacking any physical, out-in-the-open component, isn't seen or caught by anyone until the damage is already done.
Instead of slowing down and reflecting on whether a 24/7 deluge of social media, entertainment, and bullshit novelties is really worth the harm, companies are ramping up the potential for future danger in the name of profit, rolling out all these new AI tools for "innocent" purposes.