I don’t know, I used to use Nokia as a teenager… Things tend to change quite radically over the years
How is it useful? Dessert vs. food?
“Nowadays at the end of lunch everyone wants to have a dessert, but this is wrong because they should have food”…
The sentence “AI vs. algorithms” sounds pretty much like this
AI is a broad family of statistical and simulation algorithms.
They don’t replace algorithms; they are algorithms that are very powerful for some cases. For other cases they are less powerful, or overkill, and they shouldn’t be used. But there is no dichotomy, as one (AI) is part of the other (algorithms)
It’s not an averaging machine though. It’s a non-linear predictive system. Averages suck at non-linear predictions
2D animation is regarded as a dead market nowadays, with few exceptions. Betting on it is a risk, and Disney doesn’t like risks
In the simplest example of a neuron in an artificial neural network, you take an image, you multiply every pixel by some weight, and you apply a very simple non-linear transformation at the end. Any transformation is fine, but usually they are pretty trivial. Then you mix and match these neurons to create a neural network. The more complex the task, the more additional operations are added.
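A minimal sketch of one such artificial neuron, with made-up weights and ReLU picked as the trivial non-linearity (any other would do):

```python
# One artificial neuron: weighted sum of the input "pixels",
# then a simple non-linear transformation (here: ReLU).
# Weights, bias, and inputs are illustrative, not from any real model.

def neuron(pixels, weights, bias):
    # Multiply every pixel by its weight and sum everything up
    total = sum(p * w for p, w in zip(pixels, weights)) + bias
    # Trivial non-linearity: ReLU, i.e. max(0, x)
    return max(0.0, total)

# A tiny 4-"pixel" image through one neuron
activation = neuron([0.2, 0.8, 0.5, 0.1], [1.0, -0.5, 0.3, 0.7], 0.1)
```

A real network just stacks many of these in layers, each layer feeding its activations to the next.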
In our brain, a neuron binds some neurotransmitters that trigger an electrical signal; this electrical signal is modulated and finally triggers the release of a certain quantity of certain neurotransmitters at the other end of the neuron. The detailed, quantitative mechanisms are still not known. These neurons are put together in an extremely complex neural network, the details of which are still unknown.
Artificial neural networks started as an extremely coarse simulation of real neural networks, just toy models to explain the concept. Since then they have diverged, evolving in a direction completely unrelated to real neural networks and becoming their own thing.
No, what you describe is a basic decision tree, let’s say the simplest possible ML algorithm, but it is not used as-is in practice anywhere. Usually you find “forests” of more complex trees; they cannot be used for generation, but they are very powerful for labeling or regression (ELI5: predicting some number).
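To make the contrast concrete, this is the if/else kind of model being described, a hand-written toy decision tree for labeling (the feature names and thresholds are invented for illustration):

```python
# Toy decision tree for labeling: just a chain of if/else checks
# on input features. Feature names and thresholds are made up.

def classify(petal_length, petal_width):
    if petal_length < 2.5:
        return "species_a"
    elif petal_width < 1.8:
        return "species_b"
    else:
        return "species_c"

label = classify(4.0, 1.0)
```

A real “forest” would train many such trees on data and combine their votes, but each tree is still this kind of branching structure; nothing like it appears inside a generative model.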
Generative models are based on multiple transformations of images or sentences in extremely complex, nested chains of vector functions that can extract relevant information (such as concepts, conceptual similarities, and so on).
In practice (ELI5), the input is transformed into a vector and passed through a complex chain of vector multiplications and simple mathematical transformations until you get an output that, in the vast majority of cases, is original, i.e. not present in the training data. Non-original outputs are possible in case of a few “issues” in the training dataset or training process (unless they are explicitly asked for).
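That “chain of vector multiplications and simple transformations” can be sketched as a toy two-layer pass (all weights here are random placeholders, not a trained model):

```python
import math
import random

random.seed(0)  # make the placeholder weights reproducible

def matvec(matrix, vector):
    # Multiply a matrix (list of rows) by a vector
    return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

def tanh_vec(vector):
    # Simple elementwise non-linearity
    return [math.tanh(x) for x in vector]

# Random placeholder weights: 3-dim input -> 4-dim hidden -> 2-dim output
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

x = [0.5, -0.2, 0.9]              # input "embedding" vector
hidden = tanh_vec(matvec(W1, x))  # first transformation
output = matvec(W2, hidden)       # final output vector
```

Real generative models chain thousands of such layers (plus attention and normalization steps), but it is all of this flavor: vector transformations, not branching rules.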
In our brain there are no if/else branches, but electrical signals modulated and transformed, which is conceptually more similar to generative models than to a decision tree.
In practice, however, our brain works very differently from generative models
There is not a single if/else in a neural network. You are confusing it with the decision trees that are used for classification
“Life is long, I’ll have plenty of time to do it in the future”. Spoiler: life is short
Mine was a comment to say that LLMs are not just fancy autocomplete. Although technically an evolution of it, that is a bit like saying humans are fancy worms because we evolved from worms
Have you thought about asking for support from a counselor? It might help you cope
You are most probably wrong. How old are you?
Time really cures such feelings. In 20 years you’ll look back at these events with a very different perspective
Common Reinforcement learning methods definitely are.
LLMs are an evolution of a Markov chain only in the sense that any method that is not a Markov chain is… I would say not directly. Clearly they share concepts, as any method to simulate stochastic processes does, and LLMs are definitely more recent than Markov processes. Then anyone can decide on the inspirations.
What I wanted to say is that, really, we are discussing a unique new method in LLMs, one that is not just “old stuff, more data”.
This is my main point.
A Markov chain models a process as a transition between states where the transition probabilities depend only on the current state.
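A minimal Markov chain sketch, where the next state depends only on the current one (the states and transition probabilities here are invented):

```python
import random

# Transition probabilities depend ONLY on the current state.
# States and probabilities are made up for illustration.
transitions = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state, rng):
    # Sample the next state given the current one and nothing else
    states, probs = zip(*transitions[state])
    return rng.choices(states, weights=probs)[0]

rng = random.Random(42)
state = "sunny"
walk = [state]
for _ in range(5):
    state = step(state, rng)
    walk.append(state)
```

Note that `step` never looks at `walk`: the whole history is irrelevant, which is exactly the Markov property.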
An LLM is ideally less a Markov chain and more similar to a discrete Langevin dynamics, as both have a memory (the attention mechanism for LLMs, inertia for LD) and both have a noise defined by a parameter (temperature in both cases; the name “temperature” in the LLM context is derived exactly from thermodynamics).
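The temperature parameter can be illustrated with the usual softmax over token scores: low temperature sharpens the distribution toward the top token, high temperature flattens it toward uniform (the scores below are made up):

```python
import math

def softmax_with_temperature(scores, temperature):
    # Divide scores by temperature before exponentiating:
    # low T -> peaked distribution, high T -> flat distribution.
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.1]                       # made-up token scores
cold = softmax_with_temperature(scores, 0.5)   # sharper, more deterministic
hot = softmax_with_temperature(scores, 5.0)    # flatter, more random
```

This is the same Boltzmann-style weighting used in statistical mechanics, which is where the name comes from.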
As far as I remember, the original attention paper doesn’t reference Markov processes.
I am not saying one cannot explain it starting from a Markov chain; it is just that saying we could have done it decades ago but lacked the horsepower and the data is wrong. We didn’t have a method to simulate writing. We now have a decent one, and the horsepower to train it on a lot of data
We do. I pay to work with it; I want it to do what I want, even if it’s wrong. I am leading.
Same for all professionals and companies paying for these models
It’s a bit like saying a human being is a fancy worm. Technically it is true, we evolved from worms, still we are pretty special compared to worms
LLMs are not Markovian, as the new word doesn’t depend only on the previous one; it depends on the previous n words, where n is the context length. I.e., LLMs have a memory that makes the generation process non-Markovian.
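The effect of conditioning on the previous n words rather than one can be sketched with a toy order-n counter. An LLM’s attention mechanism is of course far more than this; the point is only how more context changes the prediction (the training text is made up):

```python
from collections import Counter, defaultdict

# Count next-word frequencies conditioned on the last n tokens.
# The "training text" is invented for illustration.
text = "the cat sat on the mat the cat ran on the road".split()

def build_counts(tokens, n):
    counts = defaultdict(Counter)
    for i in range(len(tokens) - n):
        context = tuple(tokens[i:i + n])
        counts[context][tokens[i + n]] += 1
    return counts

order1 = build_counts(text, 1)  # condition on 1 previous word
order3 = build_counts(text, 3)  # condition on 3 previous words

# With one word of context, "the" is ambiguous (cat/mat/road)...
ambiguous = order1[("the",)]
# ...with three words of context, the continuation is determined here.
determined = order3[("sat", "on", "the")]
```

With context length 1 this IS a Markov chain over words; growing n (and replacing counting with learned attention over the whole window) is what breaks the single-state Markov picture.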
You are probably thinking about reinforcement learning, which is most often modeled as a Markov decision process
What? The reason is that academia does not reward competency and innovative research. It rewards the ability to gather funds and to streamline paper production. Professors nowadays are often “technically” average, but extremely good startup CEOs
We, as a society, have become dumb and mean… It’s a pity
Because he was CEO of a company in a critical position to define the future of the economy. Currently the tech field is the biggest and most influential of all economic fields, and by tech here we mean the digital world. There’s absolutely no comparable sector at the moment in importance, not even pharma.
It literally defines the modern economy. Within the field, OpenAI is an incredibly important company for the future relative success and power of the big tech companies.
This is why it is so important for the world economy