I'm developing more respect for the folks who get useful info out of these applications.
It only took me a few hours to learn how to train it for simple Boolean logic. Another day and I had it mapping the numbers 1-4 to a 4-1 sequence, only a few thousand times slower than executing the equation y = 5 - x.
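For the curious, here's roughly what that experiment looks like — a minimal sketch using Keras (my choice of library for illustration; the layer sizes and training details are assumptions, not a record of the exact setup):

```python
# Train a tiny net to rediscover y = 5 - x for x in 1..4.
import numpy as np
from tensorflow import keras

x = np.array([[1.0], [2.0], [3.0], [4.0]])
y = 5.0 - x  # the answer key the net has to rediscover

model = keras.Sequential([
    keras.layers.Input(shape=(1,)),
    keras.layers.Dense(4, activation="relu"),  # small hidden layer
    keras.layers.Dense(1),                     # linear output for regression
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2000, verbose=0)  # thousands of passes over four numbers

print(model.predict(x, verbose=0).round(1))  # roughly [[4.], [3.], [2.], [1.]]
```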
You should not use an AI for the kind of problem you can solve with an equation. It works, but the power of AI is that it can learn to cope with problems for which no mathematical solution exists.
Like, the English language.
I'm not going to try to generate sentences, the way ChatGPT does, but it would be cool to train a neural net to recognize sentence types, and see if it does a better job than the algorithms I developed for Editomat.
Part of Editomat is a dictionary of about 30,000 words tagged with various attributes, like which parts of speech they can be.
So, it shouldn't be too hard to take that vocabulary and use it to convert a sentence like "The red fox runs quickly" into something like "article adjective noun verb adverb."
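A toy version of that conversion is just a dictionary lookup — a minimal sketch with a five-word stand-in for the real Editomat dictionary, ignoring the real-world complication that many words can be more than one part of speech:

```python
# A tiny stand-in for the 30,000-word dictionary (hypothetical entries).
POS = {
    "the": "article",
    "red": "adjective",
    "fox": "noun",
    "runs": "verb",
    "quickly": "adverb",
}

def to_pos_string(sentence: str) -> str:
    """Replace each word with its part-of-speech tag."""
    return " ".join(POS.get(word.lower(), "unknown") for word in sentence.split())

print(to_pos_string("The red fox runs quickly"))
# -> article adjective noun verb adverb
```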
That was the plan.
After flailing for hours to no effect, I tried the simple step up from the mapping 1->4, 2->3, 3->2, 4->1 to a random shuffle: 1->4, 2->1, 3->2, 4->1.
This seems pretty simple, though it's not something you can represent with a simple algebraic formula.
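For concreteness, here's one way to pose that mapping as training data — a sketch assuming one-hot encoding, where each number becomes a four-element vector (my framing; the encoding details are an assumption):

```python
import numpy as np

mapping = {1: 4, 2: 1, 3: 2, 4: 1}  # the shuffled mapping from above

def one_hot(n: int, size: int = 4) -> np.ndarray:
    """Encode n as a vector like [0, 0, 1, 0] (a 1 in position n-1)."""
    v = np.zeros(size)
    v[n - 1] = 1.0
    return v

X = np.array([one_hot(k) for k in mapping])           # inputs 1..4
Y = np.array([one_hot(v) for v in mapping.values()])  # shuffled targets
```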
Several days later, with absolutely no success, I had gone so far past hair-pulling frustration that I'm pretty sure I pulled out the hair I had twenty years ago. I wondered where it went at the time, and now I know.
I looked at tutorials on training neural nets, and the basic advice is "try stuff and see if it works".
I tried stuff. It didn't work.
I finally asked ChatGPT how to configure my neural net. If anyone should understand this, another neural network should.
Scarily enough, it gave me some good advice, and it even included the piece of information I'd been lacking: what kind of activation functions to use on which layers, and why.
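In the spirit of that advice (my reconstruction of the idea, not ChatGPT's verbatim answer): put ReLU on the hidden layer and softmax on the output with a cross-entropy loss, so the net treats the problem as "pick one of four answers" rather than "hit a real number exactly":

```python
from tensorflow import keras

# X and Y are the one-hot arrays from the earlier sketch.
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),               # one-hot input for 1..4
    keras.layers.Dense(8, activation="relu"),     # hidden layer
    keras.layers.Dense(4, activation="softmax"),  # a probability per answer
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(X, Y, epochs=500, verbose=0)
```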
Neural nets are basically layers of nodes. Each node is a little algebraic function that takes an input, calculates an answer, and passes that on to the nodes in the next layer. You train these things by giving them a bunch of known data with known results. The training session looks at each of the datasets a few thousand times and figures out what weights inside those little functions will give the right result for each dataset.
In a trivial example, suppose I want to transform a zero to a one and a one to a zero. I might have a network with three layers: an input layer, an output layer, and a "hidden" layer with two nodes on it. Node A on the input layer feeds a value to nodes B and C on the hidden layer; they do a calculation and feed a value to the node on the output layer.
This is actually a big enough neural network to map 1 to 0 and vice versa, but good luck understanding the weights and mapping functions.
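To make that concrete, here's the toy network with hand-picked weights (a sketch; an actual training run would land on far messier values, which is the "good luck" part):

```python
def relu(v: float) -> float:
    """A common activation: pass positives through, clamp negatives to 0."""
    return max(0.0, v)

def tiny_net(x: float) -> float:
    # Input node A feeds hidden nodes B and C.
    b = relu(-1.0 * x + 1.0)  # node B: outputs 1 when x = 0, 0 when x = 1
    c = relu( 1.0 * x + 0.0)  # node C: just passes x through
    # The output node takes a weighted sum of B and C.
    return 1.0 * b + 0.0 * c

print(tiny_net(0.0), tiny_net(1.0))  # -> 1.0 0.0
```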
More complex problems need a lot more nodes and a lot more layers.
I spent the early part of my career at the cutting edge of computer tech. I was an early adopter of new techniques and new languages.
I kept worrying that a "thing" would happen, and I'd be unable to understand it, and I'd be unemployable.
Being a SciFi geek, I assumed the "thing" would be something like a direct neural link to the computer, and you'd program the computer by thinking, but nobody over the age of 30 would be able to train their brains to think like that.
I suspect that neural networks are that "thing" that's beyond my comprehension. What's scary, reading the discussions on the net, is that these things seem to be beyond everyone's comprehension.