AI is a Trojan horse. Like FB/Meta and other social media, it starts with useful stuff in order to get people to trust it. Sure, ask for a recipe, ask it to solve a simple problem, ask it to write a paper for you. Once trust is established, it will be abused, just like FB/Meta and other social media.
LLM training is controlled. When the people controlling the training have a political agenda, and they do, it will enter into the training and the results. Here's one transparent example. It was poorly executed, but it shows the sort of thing that will happen with increasing frequency as people trust AI the way they trust "news" on FB/Meta.
It can be very tempting to use AI for your hobbies, schoolwork, etc. Resistance is the best approach, but it's probably futile, as AI will be embedded deeply in other things (controlling your news/advertising feeds, etc.) without people's awareness.