AI + neural networks?
#1
This isn't new, so maybe y'all have discussed it before, but I just happened upon this:

HUO Writer: A simple AI writer with basic Neural Network capabilities (QB64)

Posted on 2020/09/01 by skakos
Categories: Artificial Intelligence, Computers, Programming, Tutorial, Various
Tags: artificial intelligence, BASIC programming, neural network, programming, programming tutorial, QB64, QBasic, simple tutorial, tutorial

Is someone pushing QB64 to the limits?

They have some other interesting tutorials/samples too, like:

Programming a chess application in QBasic (QB64)

Programming for kids: Developing a chess program in BASIC – Part 1
#2
I haven't seen this before and it's right up my alley. Very simplified approach and not much in terms of weights and biases on the inputs. But I guess that's the point: start with a well-defined database, a simple decision routine, and user input to sharpen and guide the process.
#3
(07-15-2022, 06:28 PM)Dimster Wrote: I haven't seen this before and it's right up my alley. Very simplified approach and not much in terms of weights and biases on the inputs. But I guess that's the point: start with a well-defined database, a simple decision routine, and user input to sharpen and guide the process.

If you build anything along these lines share it here! 
I'm sure lots of people would be interested in learning more about this.
#4
If I ever get anything working right I will. I'm still in the learning/trial-and-error stage of AI. I literally have over 1,000 pieces of code trying to mimic multiple neurons. The main program is now well over 15,000 lines of code, with the vast majority of that trying to build an error-free database. The logic operators are a huge challenge. For example, I understand IMP and how it should work, but I just can't trust the results I'm getting with it. Long story short, my program is long on data and short on intelligence. At my present pace of learning and application I should have something to post in about 10 years.
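A quick way to sanity-check IMP (just a sketch, assuming QB64's usual -1 for true and 0 for false) is to print its truth table next to the equivalent (NOT a) OR b and confirm the only false row is true IMP false:

Code:
' Compare IMP against its definition (NOT a) OR b for the four truth-value pairs
DIM a AS INTEGER, b AS INTEGER
FOR a = -1 TO 0
    FOR b = -1 TO 0
        PRINT "a ="; a; " b ="; b; " a IMP b ="; (a IMP b); " (NOT a) OR b ="; ((NOT a) OR b)
    NEXT b
NEXT a

One thing that can make IMP look untrustworthy: like AND, OR and NOT it works bitwise in QB64, so it only behaves like textbook logic when the operands are exactly -1 or 0; feeding it a "truthy" value such as 1 gives surprising results.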
#5
(07-16-2022, 10:47 AM)Dimster Wrote: If I ever get anything working right I will. I'm still in the learning/trial-and-error stage of AI. I literally have over 1,000 pieces of code trying to mimic multiple neurons. The main program is now well over 15,000 lines of code, with the vast majority of that trying to build an error-free database. The logic operators are a huge challenge. For example, I understand IMP and how it should work, but I just can't trust the results I'm getting with it. Long story short, my program is long on data and short on intelligence. At my present pace of learning and application I should have something to post in about 10 years.

Heh, that's cool. I don't know anything about this stuff. 
Would it help to just model a single neuron or 2 or 3 and get that working with a tiny data set, and then scale up? 
Aren't neurons supposed to be pretty simple? 
They send and receive signals to and from other neurons, respond to stimuli, and talk to muscles. 
If you can break it down and simplify, maybe that would help? 
Spoken from a completely naive point of view...
#6
Yes, the neuron structure is very simple - just data in, data worked on, data out. The part that's tricky is the weights and bias the neuron works with. A weight on a piece of data very much depends on the accuracy of the database the neuron is getting its data from. The weight of a piece of data can simply be computer-calculated: sum up the values of every item in the database, and the weight of one item is then the value of that item divided by the total sum of the values of all the items. But here's the rub. Let's say your database is of parts of the body, or trees in a forest, or any group of items you want to group together. Depending on the result you are looking for, do you give each item an equal weight, or do some of the items play a greater role in the outcome and therefore take a greater weight? If you don't get the outcome you are looking for, you can tweak the weights with a bias value and start all over again.
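In code, that value-over-total weighting looks something like this (a made-up sketch with invented item values, not anybody's actual program):

Code:
' Hypothetical example: weight each item as its value divided by the total of all values
DIM itemValue(1 TO 5) AS SINGLE, itemWeight(1 TO 5) AS SINGLE
DIM total AS SINGLE, i AS INTEGER
DATA 3,7,2,5,8
FOR i = 1 TO 5: READ itemValue(i): total = total + itemValue(i): NEXT i
FOR i = 1 TO 5
    itemWeight(i) = itemValue(i) / total ' the weights sum to 1
    PRINT "Item"; i; "weight ="; itemWeight(i)
NEXT i

Giving some items a bigger say in the outcome is then just a matter of nudging those numbers (or adding a bias) and re-running.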

If you get a neuron with an outcome really close to what you expect, then you make other neurons doing different but complementary tasks. Group the working neurons into a perceptron and now you're on your way to predictions, or to deeper clarity on how the stuff in the database actually works together.

There is tons of stuff on the internet about perceptrons, the sigmoid function, machine learning, decision theory and game theory, regression analysis, the Bayesian likelihood function, decision boundaries, recurrent neural networks, just to name a few topics... you could (and I do) spend days just reading this stuff and trying to apply what I understand. It is a lot of fun; you get lost in it. It's one of those hobbies that justifies a desktop or laptop vs. an iPad.
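To make the "data in, data worked on, data out" idea concrete, here is roughly what a single artificial neuron boils down to in QB64 (a toy sketch with invented inputs and weights, using the sigmoid function mentioned above to squash the output to between 0 and 1):

Code:
' One artificial neuron: weighted sum of the inputs, plus a bias, squashed by a sigmoid
DIM dataIn(1 TO 3) AS SINGLE, w(1 TO 3) AS SINGLE
DIM bias AS SINGLE, total AS SINGLE, i AS INTEGER

dataIn(1) = 0.9: dataIn(2) = 0.2: dataIn(3) = 0.5 ' data in (invented values)
w(1) = 0.4: w(2) = 0.3: w(3) = 0.3 ' weights - these are what you tweak
bias = -0.2

total = bias
FOR i = 1 TO 3
    total = total + dataIn(i) * w(i) ' data worked on
NEXT i
PRINT "Neuron output:"; Sigmoid(total) ' data out, always between 0 and 1

FUNCTION Sigmoid (x AS SINGLE)
    Sigmoid = 1 / (1 + EXP(-x))
END FUNCTION

A perceptron layer is then just several of these run over the same inputs, each with its own weight set, with their outputs fed on to the next neuron(s).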
#7
(07-16-2022, 03:17 PM)Dimster Wrote: Yes, the neuron structure is very simple - just data in, data worked on, data out. The part that's tricky is the weights and bias the neuron works with. A weight on a piece of data very much depends on the accuracy of the database the neuron is getting its data from. The weight of a piece of data can simply be computer-calculated: sum up the values of every item in the database, and the weight of one item is then the value of that item divided by the total sum of the values of all the items. But here's the rub. Let's say your database is of parts of the body, or trees in a forest, or any group of items you want to group together. Depending on the result you are looking for, do you give each item an equal weight, or do some of the items play a greater role in the outcome and therefore take a greater weight? If you don't get the outcome you are looking for, you can tweak the weights with a bias value and start all over again.

If you get a neuron with an outcome really close to what you expect, then you make other neurons doing different but complementary tasks. Group the working neurons into a perceptron and now you're on your way to predictions, or to deeper clarity on how the stuff in the database actually works together.

There is tons of stuff on the internet about perceptrons, the sigmoid function, machine learning, decision theory and game theory, regression analysis, the Bayesian likelihood function, decision boundaries, recurrent neural networks, just to name a few topics... you could (and I do) spend days just reading this stuff and trying to apply what I understand. It is a lot of fun; you get lost in it. It's one of those hobbies that justifies a desktop or laptop vs. an iPad.

I don't know a lot of neurobiology, but in our own brains & bodies, does a neuron consult every bit of memory in our brain & body "database" before it knows how to react? I think we would all be a lot slower if that were the case. Maybe the "weight" of a certain type of stimulus is summed or stored more locally. There has to be a simpler solution...
#8
Quote:"...does a neuron consult ever bit of memory in our brain & body "database" before it knows how to react?"

That's a good question mascijr. I am not an expert on how the human brain functions, but I have read up on human brain neurons. There is an incredible number of them. They transmit elaborate patterns of electrical signals and are interconnected with each other via dendrites, which receive the incoming signals, and then fire out a signal via an axon. I don't think science knows exactly the internal process of the neuron... it remains a mystery... though no doubt I am very behind in my reading of the science in this area. I suspect (and am likely wrong) that the neurons of the brain vary in where they get their input. For example, smell, taste and touch could all have specialized neurons which take input solely from the sensors in our nose, tongue and skin. There could be other neurons which fire when we are thinking or puzzling out an idea. These neurons could very well be taking their input from stored memory (or the brain's database). A neuron, in terms of AI and computer coding, would appear to be a very poor representation of the real thing in the brain. It would seem to follow that the level of intelligence of an AI program could be linked to the number of neurons you build. This, I believe, is why you so often see the word Network associated with neurons.
#9
(07-17-2022, 01:35 PM)Dimster Wrote:
Quote:"...does a neuron consult ever bit of memory in our brain & body "database" before it knows how to react?"

That's a good question mascijr. I am not an expert on how the human brain functions but I have read on the matter of human brain neurons. There is an incredible number of them. They transmit elaborate patterns of electrical signals and are interconnected with each other via Dendrites which receive the inputted signals, then fire out a signal via an Axon. I don't think science knows exactly the internal process of the neuron .. it remains a mystery.. however no doubt I am very behind in my reading of the science in this area. I suspect (and likely wrong) that the neurons of the brain are varied in where they get their input. For example, smell, taste, touch all could have specialize neurons which take input solely from the sensors in our nose, tongue and skin. There could be other neurons which are fired when we are thinking or puzzling out an idea. These neurons could very well be taking their input from stored memory (or the brains data base). It would appear a neuron in terms of AI and computer coding, is a very poor representative of the real thing in the brain. It would seem to follow that the level of intelligence for an AI program could be linked to the number of neurons you build. This I believe is why you often see the work Network associated with neurons.

I would think the neurons with different jobs are simply connected to the ones their job requires them to work with. Maybe when the neurons in your fingertips detect that they're touching something, they send the signal up your arm, and the signal includes an "address" so that the connected neurons that pass on the signal know who to route it to? 

I would recommend the book "Destination Void" by Frank Herbert, which is about building an artificial consciousness using something like a neural net. It isn't the newest book (written in 1966 and revised in 1978), but it has some interesting general ideas for how an artificial consciousness might work...

This all definitely is experimental stuff, so probably try a few different approaches and see what happens...
#10
I don't think we (in our lifetime) will ever see an artificial consciousness. This implies awareness and the actual creation of an artificial life form. AI right now is very specific and task-driven, like driving a car or facial recognition or playing chess or collecting and analyzing data... very specific tasks... it's a very long way from doing what your brain can do. Some of the key people in the AI field, like Nick Bostrom, feel that IF a computer can perform at the same level as our brains, then that level is called Artificial General Intelligence (AGI). He finds it worrisome because once at this level there WILL BE a next level, which is being called Artificial Super Intelligence (ASI). A computer operating at ASI could spell the end of mankind. Bostrom has an interesting book on strategies AI programmers should consider if we want a world where humans can co-exist with an ASI.