@maybeimaleo the training is data ingestion. It was still coded originally and given rules and guidelines. Coded to understand "natural language".
There is no randomness; it's the illusion of randomness, cloaked in the collective belief that it's smart or magic.
@JGNWYRK That's true. There is outboard and I/O programming. But the statistical inferences that lead to one output over another (and to different outputs for the same prompt) come from the weighting of the neural net's nodes, which results from ingesting the training data. These aren't random, of course, but determining why an LLM gave one particular answer is, well, tough at the very least.
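A toy sketch of the "different outputs for the same prompt" point: in practice the model produces a fixed probability distribution over next tokens, and the variation comes from sampling that distribution, often scaled by a temperature. The token names and scores below are made up for illustration, and with a fixed seed even the sampling is fully reproducible.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Turn raw scores into a probability distribution.
    # Lower temperature sharpens it; higher flattens it.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores the model might assign for one prompt.
tokens = ["cat", "dog", "fish"]
logits = [2.0, 1.5, 0.2]
probs = softmax(logits, temperature=0.8)

# The same prompt, sampled five times: the distribution is fixed,
# only the draws vary — and with a fixed seed, even they don't.
rng = random.Random(42)
picks = [rng.choices(tokens, weights=probs)[0] for _ in range(5)]
print(picks)
```

So "different answers to the same prompt" isn't magic or true randomness; it's weighted sampling from a deterministic distribution, which is exactly why tracing any single answer back through the weights is so hard.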