Neural networks
TLDR
LLMs are big neural networks, and a neural network is basically just a bunch of matrices (grids filled with numbers) that we multiply together in fancy ways to make some prediction like "this image contains a cat" or "the next word in this sentence is 'squirrel'". During training, we run the neural network forward and backward a bunch of times until we get good values in these matrices: values that make correct predictions. You can kinda think of that like tuning the strings on some freakishly large guitar. We want the guitar to play the right notes, so we keep twisting the tuning pegs until we get there.
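To make "matrices multiplied together in fancy ways" concrete, here's a minimal sketch of a forward pass in NumPy. Everything here is made up for illustration: the sizes, the random weights, the two-layer shape. The point is just that the network's "knowledge" lives entirely in the matrices `W1` and `W2`, and a prediction is nothing more than matrix multiplications with a squashing step in between.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two matrices of tuned numbers (sizes are arbitrary for this sketch).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x):
    # Multiply by a matrix, squash negatives to zero (ReLU),
    # then multiply by another matrix to get one score per label.
    hidden = np.maximum(0, x @ W1)
    scores = hidden @ W2
    return scores

x = rng.normal(size=(4,))           # a made-up 4-number input
scores = forward(x)
prediction = int(np.argmax(scores)) # pick the highest-scoring label
print(scores.shape, prediction)
```

Training is the part this sketch leaves out: the backward pass nudges `W1` and `W2` a tiny bit at a time until the highest score lands on the correct label, which is the knob-twisting in the guitar analogy.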
Kyle's example code and other stuff
Further reading
The best place to get started is Grant Sanderson's video series on neural networks. If you're following my content in a linear fashion, I suggest you start with the first four videos in the series.