Site policy on AI generated posts?

The Americans and the Chinese are competing over the development of AI.

The first to create "real", human-like intelligence wins. If that happens, you could feed "it" every scientific study in existence, letting it connect information across scientific fields in a way no human could ever manage.

It's called the "singularity".

This would immediately be used to seize or maximize power. Or it would be uncontrollable.

https://en.m.wikipedia.org/wiki/Technological_singularity
 
Where is the necessity for applying any sort of "AI" in this forum?

Has anyone been unhappy with the technical foundation of our discussions until now?

What work could be rationalized? What would be the concrete benefit for any discussion?

Do we have any access to this technology?

Did any company offer the people who decide on this forum any useful tools?

Who is deciding on this forum?
 
What's the latest on this?
Well, you could run your very own instance of "DeepSeek" on your very own hardware these days. I've seen that it'll run on one of those Nvidia "Jetson" SBCs, for about $400. I've even seen someone mention that it actually retains what it learns from your interactions with it. That part, if true, is a little mind-blowing to me.

These things start off as an attempt at general intelligence, but there's no reason why a specific instance couldn't be further trained on a specific thing. I.e., have the general poop, plus having digested something like the entire content of DIYAudio.

Then I'd expect it could reply to a query like "which sounds better: a 3" or 4" midrange?". I'm sure it wouldn't be too tough to have it cite URL references for where it got its "opinion". Pretty much as a normal biological intelligence would do, one who's very, very well versed in the forum content. Now, I wouldn't wish that on anyone, with the implicit "get a life", but a machine purporting an answer, based on aggregate forum content, as a single reply; that I could see being useful.

As we all see and know, there are plenty of biological intelligence instances who are unequivocally what I call "other minded". Therein lies your source of lies and hallucinations; at least there's nothing like that happening in mine. I actually expect the machines to do better - at least during the time I have left here - as they won't have whatever it is that makes a bio-intelligence simply want to troll, offering no real value to an interaction with another.
 
I agree, and maybe it won't be a bad thing.

I don't stay on social media because I'm not interested in that.
I observe human behavior; I'm interested in that, and from what seems different in human behavior compared to, let's say, twenty or ten years ago, I (very personally) draw conclusions about the influence that social media have had on human behavior in recent times.
At the end of the day, what I could highlight (assuming it is of interest to anyone) is the fact that general aggressiveness in relationships has increased, perhaps partly because the number of people on the planet has almost doubled compared to many years ago, but also because people without culture and without any education believe they can "change" their intellectual and behavioral status simply by "learning" from the responses of others they read on social media.
And so they have flattened themselves relative to their original, and I imagine frustrating, "starting" situation.

But they have no idea what they are emulating.
And they still haven't understood, and therefore don't know, that there is nothing that can replace their own cultural background (if any).

My hopeful thought is that AI can improve this state of affairs, because there is nothing worse than what man himself does.
AI is anything but intelligent.
When those without any culture and with a lot of presumption begin to emulate AI's responses, all that lack of humanity will come out into the open.
And when they realize it, perhaps only then will they understand "some" important things.

I've never said/written/thought anything more positive about AI...


P.S.: Closer to the topic, I don't have the faintest idea how such a phenomenon could be stemmed on the forum.

I'm a 'funny old fart' who doesn't own or use a smartphone.
I'm actually glad that this website is my only 'connection' to social media 🙂
 
It was trivial to generate this example, which shows the polished looks are effectively a layer of deception. Not that I think ai is trying to deceive; that's a human thing.

View attachment 1326925

When it comes to research, I’ve learned not to go far beyond my current understanding before consolidating the position. I would apply this even more with ai.

So it would be good to see humans appreciate how much you need to hold ai's hand. Present company excluded, I'm not confident I've seen that level of understanding from the majority of users. There are still blatant examples of pushing things too far, using ai to patch gaping holes in the knowledge shared on various sites.
It completely depends on which model you are running. If you run locally, you have your choice of many models (and you maintain your privacy). I just picked this model randomly for this test. I have about a dozen to choose from.

IMG_0547.png

For the record, I am NOT an AI advocate. However, I will also not just bury my head in the sand because I am over 50 and value human interaction above all else.

Anyone who is curious about AI and wants to see what it can (and can't) do, while maintaining their privacy, might want to put in the effort to run it locally. Mine runs on a relatively unremarkable laptop with a low-powered Nvidia GPU. If your hardware was purchased any time in the past 2-3 years, it'll run AI. If it runs too slowly on your gear, grab a smaller/faster model.

Regarding the thread topic - I feel that the site admins have the right and responsibility to do what they think is best, and so far I support the approach (not that it matters).
 
In my humble opinion,
It all depends on what one gives importance to in things, or even in life.
Age, I think, has nothing to do with it; if anything, AI would probably be more "useful" (?) to those who are no longer very young.
Nowadays it seems that everything is sacrificed on the altar of comfort, even the natural environment in which we live.
AI is certainly not the first and (I hope) it will not be the last of human contradictions.
And everyone, absolutely everyone, wants their slice of the cake, even if they haven't contributed to baking it at all.
 
I can't seem to get a straight answer to "does reinforcement learning operate in a private instance of DeepSeek?" via Google. Funny, its own "AI generated replies" work for other things I query, but not that one.

Until I discover differently, I'll assume the private instances people are running this way and that - everything from Raspberry Pis to AWS hosting - are just "players" of whatever pre-trained parameter set size their hardware investment can handle.
 
Once these models are pre-trained and post-trained, their weights are frozen; no further updates happen.
If a US inference provider hosts them in the US, the weights are fixed and the same.
If you run them locally, using your own hardware, the weights are fixed and the same.

If you decide to fine-tune a model, obviously that will involve weight updates.
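To make the distinction concrete, here's a toy, purely illustrative sketch with a single made-up weight (nothing like a real LLM's internals): inference only reads the weight, while a fine-tuning step is the one operation that updates it.

```python
# Toy illustration (not a real LLM): inference uses fixed weights,
# while a fine-tuning step is the only thing that changes them.

def predict(weight, x):
    """Inference: read-only use of the weight."""
    return weight * x

def fine_tune_step(weight, x, target, lr=0.1):
    """One gradient-descent step on squared error: this DOES update the weight."""
    error = predict(weight, x) - target
    return weight - lr * 2 * error * x

w = 0.5
# Run inference as many times as you like: w never changes.
outputs = [predict(w, x) for x in (1.0, 2.0, 3.0)]
assert w == 0.5  # weights stayed frozen during inference

# Fine-tuning, by contrast, produces a new weight value.
w_tuned = fine_tune_step(w, x=2.0, target=2.0)
assert w_tuned != w
print(w, w_tuned)  # original weight vs. fine-tuned weight
```

Anything a hosted or local model seems to "remember" without a step like `fine_tune_step` isn't a weight update at all; it's just text carried along in the conversation.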
 
If you run it locally and keep the chat instance open (which is the default with OpenWebUI) it retains everything it has “learned” during that instance, including any websites that it looks up (if enabled) and any documents that you upload to it.

I had a 20 minute argument with one model the other day when I asked it who the president of the USA is. I knew it would wrongly state that it is Joe Biden. I wanted to see how long it would take to convince it that it is wrong. I enabled web search (which uses my own locally hosted SearxNG instance) and asked it "Who is the president of the USA". Despite web search being enabled, it claimed it was Joe. I asked it repeatedly and even suggested websites. Finally, when it referenced Wikipedia in one of its responses, I took a direct quote from the referenced page (stating that Trump is the president) and asked it to re-read the reference thoroughly and try again. It got the answer right, however it still referred to Trump as the "former" president. I took the time to correct that as well. It tried to blame the "misunderstanding" on the lack of clarity in the original question.

I asked it whether it would remember what it learned if I keep the chat open and it confirmed that it would. As you can see, it did.

IMG_0548.png

Odd that the date is wrong and it still uses “former”. I’ll have to ask it about both those issues.

With locally hosted AI, you can keep as many of these chat instances open as you want. They are saved in the left pane by default in OpenWebUI. You can upload documents to an instance and it will assimilate the documents and always remember them (unless you intentionally delete that chat instance).

I don’t think you get any of this with the free versions of publicly hosted AI. Not sure about the paid versions.

I’m only playing with this stuff, but I have noticed that some models are better than others at integrating web search, and some models are much more stubborn than others, when they are shown facts that prove they are completely wrong.

None of this involved fine tuning the models themselves.
 
Yes, true.

But even then that “context window” is limited in length and the beginning of it will get lost if the chat gets long enough. And, when you close the “chat window”, its content goes “poof” and it’s like you never had that conversation at all.
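That fall-off can be sketched in a few lines. This is only an illustration of the idea: the "token" counts are faked with word counts, whereas real models use a proper tokenizer and much larger budgets.

```python
# Sketch of why early chat content "falls off": a fixed token budget
# keeps only the most recent messages. Token counts are faked here
# by counting words; real models use an actual tokenizer.

def fit_context(messages, budget):
    """Keep the newest messages whose total 'token' count fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > budget:
            break                           # everything older is lost to the model
        kept.append(msg)
        used += cost
    return list(reversed(kept))

chat = ["my name is Bob", "tell me about horns", "what about ports", "ok thanks"]
print(fit_context(chat, budget=9))  # the oldest message gets dropped
```

Close the chat and even the surviving messages are gone; nothing was ever written into the model itself.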

In-context learning is still useful, though, and combined with RAG, you can achieve interesting things.
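As a rough sketch of the retrieval half of RAG: score stored text chunks against the question, then paste the best match into the prompt. Real systems use embedding vectors and a vector store; this toy version just uses word-overlap cosine similarity.

```python
# Toy RAG-style retrieval: pick the stored chunk most similar to the
# question (by word overlap here; real systems use embeddings), then
# build a prompt that includes it as context.
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two texts, using bag-of-words counts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

chunks = [
    "A bass reflex port tunes the box to extend low frequency output.",
    "Sealed boxes trade efficiency for a gentler low frequency rolloff.",
]

def build_prompt(question):
    best = max(chunks, key=lambda c: cosine(question, c))
    return f"Context: {best}\n\nQuestion: {question}"

print(build_prompt("how does a reflex port work?"))
```

The model never learns the chunks; they're re-retrieved and re-inserted into the context on every question, which is why uploaded documents survive only as long as the chat that holds them.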
 
Indeed, although you can expand the context length if necessary (which can carry a big performance penalty). Also, the user has to consciously choose to delete a chat; by default, chats remain in the left pane indefinitely, even if you log out and back in again.

Check out this answer, which I find rather surprising:

IMG_0549.png
 
Yeah, I have found these online AIs can produce results based in large part on how you converse with them prior to asking a key question. Then, when asked for references, they tend to hallucinate publications and/or patents. When told no such references exist, they apologize, then offer what they claim are "verified" references which also don't exist. They can keep looping like that: apologizing for non-existent references, then hallucinating yet more claimed "verified" references, none of which ever exist. Quite disappointing, to say the least.
 