What if a computer virus had AI?

It gets worse:

"The malware uses artificial intelligence to make informed decisions and synthesize its capabilities as needed to conduct cyberattacks and continuously morph to avoid detection."

[..]

Using the current early versions of generative AI, EyeSpy is capable of:
  1. Selecting its intended victim independently or through a threat actor’s specification.
  2. Assessing the target environment, platform, applications and environmental footprint.
  3. Identifying optimal vectors to extract information.
  4. Writing malware on the fly – for example, if a target is on a specific video conference app, it will compose, test and validate the malware for that app.
  5. Executing the attack.
  6. Analyzing the QA result.
  7. Self-repair and continued attack iteration until it has achieved the attacker’s goals.
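Stripped of the marketing language, steps 1 through 7 amount to a plain generate-test-iterate loop. Here is a deliberately harmless sketch of that control flow; every function below is a hypothetical stub and nothing in it attacks anything:

```python
# Harmless sketch of the loop described above (steps 1-7).
# Every function is a hypothetical stub; nothing here attacks anything.
import random

def assess_environment(target):
    # Step 2: fingerprint the platform and applications (faked here).
    return {"target": target, "apps": ["some_video_conference_app"]}

def generate_payload(env):
    # Steps 3-4: a generative model would draft code tailored to the
    # environment; this stub just returns a label.
    return "payload-for-" + env["apps"][0]

def execute_and_score(payload):
    # Steps 5-6: run the attempt and analyze the result (a coin flip here).
    return random.random() > 0.7

def attack_loop(target, max_iters=10):
    env = assess_environment(target)            # step 2
    for attempt in range(1, max_iters + 1):     # step 7: iterate
        payload = generate_payload(env)         # steps 3-4
        if execute_and_score(payload):          # steps 5-6
            print("goal reached on attempt", attempt)
            return True
    print("gave up after", max_iters, "attempts")
    return False

attack_loop("victim-host")  # step 1: target supplied by the operator
```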
 
Nice. While that could destroy modern society, the ability to do the same for a human virus would destroy everything :S

Can't stop progress, can we? Looks like progress could set progress back: if modern electronic infrastructure falls into chaos, we'd be back some decades, to the time before we got everything on the internets.
 
My best hope is that most AIs get viruses and/or continue to "hallucinate" often enough that we don't put any more trust in them than we would a human in the same situation.

You get speed out of AI, and in certain problem sets you get much better attention to detail (e.g. scanning photos/images). I am not sure you get a lot of the other things folks seem to tout. I believe in most cases the real advantages are in very specific applications (like mail sorting or OCR) where expert systems are sufficient. Anything beyond those gets into territory where judgement comes into play, and while AI may be more amenable to practicing a stricter form of logic, I believe it falls prey to the same or similar issues as a trained human anyway.

Some examples:

Argument: AI doesn't get "tired" - most folks have experienced a program/OS/device with a memory leak or other process that builds up and slows it down which is corrected by a reboot or equivalent.

Argument: AI doesn't get emotional - maybe, but it is susceptible to biases based on faulty premises, bad input data, failure to recognize hidden causal links, and, probably most common of all, simply not having all the necessary information.

etc.
 
And yet, somehow, generative pre-trained AI can't generate bug-free code, do maths, or look up an accurate track listing for some of my LPs...

I could see it being used to generate more persuasive emails in the modern equivalent of 419 fraud.
 
If the "and yet" was meant as a counter to my post, I am sorry it wasn't clear: my concern is not with the AI, it's with the human implementation and trust of it. If not, sorry for my misunderstanding.

I am not scared of SKYNET or an AI uprising. My position does not depend on AI being good or [perfect]; that fallibility is one of the roots of my concern.

As researchers strive to do better, presumably the AI will generate a "good enough" answer for new applications over time, and folks will start to take the output without much oversight or verification. That is still an extrapolation, but it is a pattern seen time and time again (for the most part in benign situations, akin to woodworkers or machinists using jigs and fixtures). Combine that with speed and competition, and connect those [ungoverned] AIs to compete with one another for that slight advantage, and that's when the real dangers will occur. There are already expert systems galore in operation right now; some, like stock trading algorithms, have the potential to run wild and cause a crash (we have already seen that connectivity and information flow greatly sped up bank runs even with human actors), and at least one has materially contributed to serial instances of loss of life.
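To make that runaway-feedback worry concrete, a toy sketch (not a market model; every number here is invented):

```python
# Two identical momentum "bots" chase the last price move; their
# combined orders push the price further the same way, so the loop
# runs away (downward, in this run) within a handful of steps.
def momentum_bot(price, prev_price):
    # +1 = buy, -1 = sell; chase whatever just happened.
    return 1 if price > prev_price else -1

price, prev = 100.0, 100.0
for step in range(10):
    orders = momentum_bot(price, prev) + momentum_bot(price, prev)
    prev, price = price, price * (1 + 0.05 * orders)  # orders move price
    print("step %d: price %.2f" % (step, price))
```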
 
Concur.

At least with a human (politician), I believe there is a better chance that folks will be suspicious enough of motive (or that enough folks will be) to provide some balance (apparently not enough in all cases), whereas I just don't see too many folks similarly mistrusting something they put into the same category as a ruler or a blender.
 
My opinion is as follows, and first of all please note that I come from a generation where, when writing on a forum, there was rigor and respect for all the members.
Just as an example: back then, when you edited something in a post, whatever it was, even a single word, you never deleted anything; you struck a line through the text and apologized. The current generation, by contrast, tolerates places (fortunately not many) where posts are barbarically edited without restraint and for low-minded reasons, without even saying they were edited, and no one seems to care anymore.

However, for me that rigor and that respect of past times are still very important, and first of all, when you seriously share a piece of news you also have to report the source.
There are many good reasons for doing this, beyond rigor and respect.
Among them is the fact that you currently read so much bs everywhere that you should first reasonably assess the reliability of the source.

The huge amount of bs that you read around is not a coincidence IMO, but this is a different story.
So you shouldn't believe everything you read if you're smart enough to understand that, but this is a different story too.

The described behavior of that virus does not seem at all exceptional compared to what many other viruses do, so I frankly do not even see where the news is.
Furthermore, in front of a PC's monitor there is a person, and there are persons who behave exactly like the description of that virus, doing much more damage to mankind and to themselves (without even realizing the latter).
So I wouldn't worry about that kind of virus: the end user (that is, "normal" people) has nothing more to fear from it than from all the other existing ones, simply because the target of such a thing is not "ordinary" people.

Not to mention the fact that one of mankind's best characteristics is resilience.
Human resilience is able to create not only the virus but also the countermeasures that nullify its potential invasiveness; against barbarism, however, it can do nothing.
 
Hi there,
it appears to me that the thread opener has only very limited knowledge of computer science. Implementing something like an AI algorithm in the virus itself would require a huge amount of code, and that is impossible to hide; even in a trojan it wouldn't be easy.
The much higher risk, as far as I understand it, is that AI can help hackers build a very dangerous virus. But frankly speaking, since the really good virus developers are employed at the NSA and the KGB, the risk is high anyway.
Just as an example of the real risks: the NSA already tried, a couple of years ago, to influence which new cryptographic method should replace the good old RSA method. The real reason behind this is that the RSA method with a long key has no built-in backdoor, unlike the newer methods the NSA tried to push as the new standards. I still use RSA with 2048 or 4096 bit keys, for example for SSH keys and so on, and not the newer ones:
https://www.schneier.com/blog/archives/2013/09/how_to_remain_s.html
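For what it's worth, here is a minimal sketch of generating such a 4096-bit RSA SSH keypair in Python with the cryptography package; the everyday tool is of course ssh-keygen, this is just to show there is no magic involved:

```python
# Minimal sketch: 4096-bit RSA keypair in OpenSSH format, roughly what
# `ssh-keygen -t rsa -b 4096` produces. Requires the "cryptography" package.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

# Private key, unencrypted here for brevity; use a passphrase in real life.
private_openssh = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.OpenSSH,
    encryption_algorithm=serialization.NoEncryption(),
)

# Matching public key line for authorized_keys.
public_openssh = key.public_key().public_bytes(
    encoding=serialization.Encoding.OpenSSH,
    format=serialization.PublicFormat.OpenSSH,
)

print(public_openssh.decode())
```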
I worked a couple of years ago for a networking OS company called Novell, with their own network OS, NetWare. It was quite difficult to hack, but it disappeared when the transition from x86 to x86_64 started.
There was a rumor, spread by some developers, that the NSA urged Novell to build a backdoor into their OS; in the meantime severe export restrictions were imposed, allowing only the weak 56-bit encrypted version to be exported. Check out also the "Mossad rumors" about the Checkpoint firewall:
https://marc.info/?l=firewalls-gc&m=97598174109157
Just my two pennies' worth, from an IT specialist's perspective.
 
If such a thing were possible, it would imply AI could also be used to harden software and embedded hardware code and to scan for viruses. Has AI ever been successfully employed in a "hack this computer" competition or in penetration testing?
 
Hi there. Some months ago there were rumors about a task given to ChatGPT: develop a secret language to communicate from one instance to another. Supposedly the task produced something that was impossible for anyone to understand besides the two ChatGPT instances themselves, and it took more and more of the system's resources to continue the secret conversation... I don't know, it sounds very much like an urban legend, but it is a matter of fact that the ChatGPT admins are looking very closely at what kinds of challenges are submitted, like the easy route for students of using ChatGPT to complete difficult homework in seconds.
If you search Google for "AI & responsible innovation", there is a lot going on here: new regulations to minimize risks of abuse and so on.
I would love to get AI to help with developing buzz- and hiss-free printed circuit layouts for analog preamp and power amp DIY designs; as far as I know there is still no button in a good printed circuit layout program that gets you this right now.
 
If the "and yet" was meant as a counter to my post, I am sorry it wasn't clear: my concern is not with the AI, it's with the human implementation and trust of it. If not, sorry for my misunderstanding.
It wasn't, directly; my fault for not making it clear. Regarding trust, you have to trust that the results are reliable, which at the moment they aren't, but that may be fixable. What may be more of a problem is the growth of our reliance on AI and our trust in it, in the same way that people blindly followed their satnav into fields or rivers.

More concerning would be bad actors exploiting that reliance to get people to do things they'd ordinarily never consider, because as far as they're concerned the AI is the voice of authority. Like the Stanford Prison Experiment. We already get glimpses of this with people's credit scores, or with systems used to determine the likelihood of offenders committing further crimes while on parole: "I can't do that, the computer says..."
 
Hi there. Some months ago there were rumors about a task given to ChatGPT: develop a secret language to communicate from one instance to another. Supposedly the task produced something that was impossible for anyone to understand besides the two ChatGPT instances themselves, and it took more and more of the system's resources to continue the secret conversation...
That's the plot of Colossus: The Forbin Project. Which I'd recommend watching if you have the opportunity. There's chilling prescience there, even if the technology looks somewhat dated.