DKN's AI Posts, from DKN to LI
Introduction
At the time of this writing (2025/01/30),
my top-performing AI posts on LI, from the list below, have been:
- 359 impressions for item #1, Reality of AI System
- 251 for #2, Some Critical Questions
- 530 for #5, A Proper Way
- 244 for #4, Validation of AI
- 1124 for #10, Validation of AI
- 1036 for #7, DKN's View
- Reality of AI System
- Some Critical Questions To All Serious AI Professionals
- Unrealistic AI-ML
- A Validation of AI System
- A Proper Way Forward for AI
- Risky AI-ML
- DKN's View
- Very Tiny AI System
ML-AI is extremely dangerous, like a nuke, posted on 2024/04
Why is ML-AI extremely dangerous, like a nuke?
Dear LI fellows,
First and foremost, my sincere apology for any inconvenience this post may cause.
Recently I found, via GGL, a comment from Elon Musk at a meeting with the UK prime minister on 01 Nov 2023 in the UK:
Tech billionaire Elon Musk said that AI has the potential to become the "most disruptive force in history."
First, I just want to emphasize that I assume he meant ML-AI (Machine Learning), not the original AI (first proposed by Warren McCulloch and Walter Pitts in 1943, 80 years ago). I cannot reach him for clarification.
I absolutely disagree if he meant AI.
Even with my fuzzy eyes and head, due to my 2 strokes 7 years ago in June 2016,
fortunately my blessed human intelligence (BHI) is still able to appreciate his sincere and thoughtful warnings regarding ML-AI.
I'm an unfortunate retiree with a disability, at age 69, with a limited time budget left over,
still worried about the well-being of my 3 blessed college-graduated kids
(2 daughters, Mary and Helen, + 1 son, David).
I'm prepared to shake The Lord's hand.
I have a joke to share with you.
One day the Head of Heaven (HoA) calls the Head of Hell (HoL) for a chat.
HoA: "How's it going down there?"
HoL: "It's really relaxing and comfortable down here ;-)"
HoA: "You're kidding. How come??"
HoL: "No idea. It's the very first time an Engr got here.
He created a lot, like air conditioners, electric generators,
washing/laundry machines, etc."
HoA: "It must have been a huge mistake to let an Engr get there."
So I have no worry about my upcoming last minute to meet the HoA, ha ha ha :-)
I wish to present my analysis of this warning in this post,
as usual in my favorite divide & conquer way.
We have 2 parts = AI + ML.
An AI system is composed of 3 layers:
(1) Input Layer (in the training data)
(2) Hidden Layer (NOT in the training data; the number of neurons is set by the AI guy), to be trained to provide our desired output
(3) Output Layer (in the training data), for the desired output
Each training data set is simply composed of ONLY input data and our desired output data.
That's it, just I/O data,
nothing more or less.
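As a concrete sketch of the three layers and the I/O-only training data described above, here is a tiny network trained end to end; the XOR task, hidden-layer size, learning rate, and iteration count are my own illustrative assumptions, not from the post (sketched in Python/NumPy, though the author works in Octave elsewhere).

```python
import numpy as np

# Minimal sketch of the 3-layer structure: input layer, one hidden
# layer (size chosen by the AI guy), output layer. The XOR task and
# all hyperparameters here are illustrative assumptions.

rng = np.random.default_rng(0)

# Each training data set is ONLY input data and desired output data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
Y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden = 4                         # NOT in the training data: our choice
W1 = rng.normal(size=(2, n_hidden))  # input -> hidden weights
W2 = rng.normal(size=(n_hidden, 1))  # hidden -> output weights

first_loss = last_loss = None
for i in range(20000):
    H = sigmoid(X @ W1)              # hidden layer activations
    out = sigmoid(H @ W2)            # output layer
    err = out - Y                    # actual minus desired output
    last_loss = float(np.mean(err ** 2))
    if first_loss is None:
        first_loss = last_loss
    # Backpropagate the squared-error gradient and update the weights.
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= 0.5 * H.T @ d_out
    W1 -= 0.5 * X.T @ d_hid

print(f"mean squared error: {first_loss:.3f} -> {last_loss:.4f}")
```

Note that the training set really is nothing but I/O pairs; everything else (hidden size, weights) lives outside the data.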
So AI is absolutely not harmful but helpful, as it can provide the desired output we really want.
Meanwhile the so-called ML-AI is unable to provide such simple I/O data and passes the blame to the machine.
These ML guys must have no idea what they're doing??!
Is there any useful system that provides no output?
Is there any system with only either input or output, but not both??!!
That is the key difference between AI and ML-AI.
Here is my question to the ML-AI guys:
What can such ML-AI really provide us?
Are you saying we should depend on the machine??
That is the root cause of the real danger.
The machine could explode and blow everything away like a nuke, because there is absolutely no guarantee to prevent it from happening.
That is why ML-AI is absolutely not allowed to be developed at all.
Am I reasonable or aggressive?
Now I challenge ML-AI to provide I/O data for better Edu & Health Care for the USA.
DuyKy Nguyen, PhD in EE
Ex Symmetricom Sr R&D Engr, unfortunate retiree with disability
Tiny AI posted on 2024/02
VTFI (Very Tiny Fractional Intelligence)
It is just a VTFI, not AI, as discussed in this post.
I got involved in AI in 1993 (30 years ago), when my classmate at UTS asked me to help his brother with an AI project at the end of the brother's AI class in Computer Science at Sydney Univ.
The brother loaned me an AI textbook used in the class. That's how I started my AI. After my AI research contract at UTS in 1994, I was never involved in AI again, so I have no idea of the latest AI updates. Correct me if this post contains outdated AI info. But I'm too sick of reading so many hoaxes on AI. It's unwise to invest our limited resources, time & money, in something we're not sure about, like AI. AI advocates may not agree with me, so may I have their answers to my questions below, per my outdated AI info from 30 years ago.
1) AI is a neural network similar to the human brain, believed to have neurons connected to adjacent ones via weighted links, with each neuron becoming active if the incoming signal amplitude is above some threshold level. Is this still valid??
2) I just got the answer to my query "how many neurons are in the brain" from Google: a human brain has more than 80 billion neurons, all connected in a massive network that makes us who we are. My next question is: can we implement our AI with such a huge number? I really don't think so.
3) If not, what number of neurons should be used in our AI, and WHY??
It must be a practical small number.
Hi AI expert, would you please provide me the number of neurons you use in your AI network?
I really doubt it reaches 8 million, a very tiny fraction of 80 billion (1 part in 10,000).
That's why I call it Very Tiny Fractional Intelligence.
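The threshold neuron described in question 1) can be sketched in a few lines; the weights, the threshold, and the AND-gate example below are illustrative assumptions of mine (a McCulloch-Pitts style model, in Python rather than the Octave the author mentions elsewhere).

```python
# A sketch of the threshold neuron in question 1): a neuron connected by
# weighted links becomes active when the weighted sum of its incoming
# signals reaches a threshold. Weights, threshold, and the AND-gate
# example are illustrative assumptions.

def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts style neuron: binary output on a weighted sum."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A 2-input neuron with unit weights and threshold 2 acts as an AND gate.
w = [1.0, 1.0]
print(mcp_neuron([1, 1], w, threshold=2))  # 1: fires
print(mcp_neuron([1, 0], w, threshold=2))  # 0: stays quiet
```

Any logic built from such neurons is just weighted sums and thresholds, which is why the count of neurons matters so much to the author's argument.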
It's even smaller, as the human brain is not 100% active.
What are the active types in the human brain?
processing/reasoning: ??%
storage: ??%
cognition: ??%
what else: ??
Which brain activity is the current AI model for? Is it processing/reasoning?
How do these activities interact with each other??
These issues could be addressed in neuroscience; if so, are there any new significant breakthroughs yet??
How do neuroscience and AI cooperate to support each other toward some breakthrough??
If we cannot answer these questions and still want to invest in AI,
we're all guilty toward our next generation of wasting our limited resources on the wrong investment; we're really gambling our limited resources on AI, myself first, so I have this post.
Are my concerns reasonable, or too conservative??
I'm quite aware of the tradeoff: no pain, no gain.
But what do we gain with AI, and at what cost??
What pain do we leave for our next generation? Is it a painful system of education and of healthcare??
Why not stem cell R&D for cures of painful diseases like cancer, kidney, heart??
I really offer my sincere apology for any inconvenience this post may cause to any LI member. May God bless USA with the right investments for health and safety for all future USA fellows. Amen
DuyKy, PhD, Ex Symmetricom R&D Engr, unfortunate retiree with disability
DKN's View posted on 2024/01
DKN's View on AI R&D
In this post I wish to present my narrow view on our current AI R&D.
Sincerely sorry for any inconvenience this post may cause.
To the best of my knowledge,
our current AI has been based on the McCulloch and Pitts (MCP) model, named after the two scientists (Warren McCulloch and Walter Pitts) who proposed it in 1943, 80 years ago.
However, per my control background and 4 years as a tutor in a Control Lab for senior EE / Master students and practicing engineers, in 1994-1997, in Instrument & Control EE at University of Technology Sydney, Australia:
no mathematical model can be used before it is verified.
My note on modelling in this control lab can be found below:
https://lnkd.in/eqA6ugRC
So my real concern is:
Was this 80-year-old mathematical AI model ever verified??
I could not find such info via googling.
If it probably was not,
then all our AI R&D may have been rendered useless??
What a real misfortune: we may have wasted a lot of our limited real resources on unreal and worthless work??
In my control discipline, the following are strictly honored:
1) system identification, for a mathematical model
2) system verification, by evaluating the output of a simulation of the model against the output of the real system using the same step input
3) once verified, a system controller is developed based on the verified model
As you've seen in my modelling note, a mathematical model was precisely derived per physical law, with some assumptions.
All system verifications did not match absolutely, but were within acceptable tolerance.
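Step 2) above can be sketched numerically: simulate the identified model's step response, compare it with the real system's response to the same step input, and accept the model if the deviation stays within tolerance. The first-order plant, time constant, and tolerance below are my own illustrative assumptions, not taken from the author's lab note.

```python
import math

# Sketch of system verification: compare a model's simulated step
# response against the "real" system's response under the same unit
# step, accepting the model when the deviation is within tolerance.
# The first-order plant and the 0.01 tolerance are assumptions.

tau = 2.0    # time constant of the assumed first-order plant
dt = 0.001   # simulation step
T = 10.0     # duration of the step-response comparison

def real_step_response(t):
    # "Real" system: exact unit-step response of a first-order lag.
    return 1.0 - math.exp(-t / tau)

# Model: forward-Euler simulation of dy/dt = (u - y) / tau with u = 1.
y, t, max_dev = 0.0, 0.0, 0.0
while t < T:
    y += dt * (1.0 - y) / tau
    t += dt
    max_dev = max(max_dev, abs(y - real_step_response(t)))

print(f"max deviation over the step response: {max_dev:.5f}")
verified = max_dev < 0.01  # within acceptable tolerance
print("verified" if verified else "not verified")
```

As in the author's lab note, the match is not absolute (the Euler model has discretization error) but falls within the stated tolerance.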
Per my control discipline, I've seen this as very productive R&D with a proper working attitude.
There must be some kind of problem in any product.
Once, a problem was reported in a product I was responsible for;
my first job was to verify its existence before starting to solve it.
I always see my product as buggy and think seriously about its ill behavior, to prepare precautions accordingly.
Being human, we err after all.
The only way to make no mistakes is to do nothing, but that's not our choice.
So I'm not surprised by our wrong AI approach.
I truly honor the job well done in creating the MCP model
as a mechanism to mimic the human brain,
regardless of how accurately it behaves like the human brain or not.
It was an original contribution to our scientific and technical society,
somehow stimulating a new area, hopefully without any harm to humankind.
Even if it really is a correct model of the human brain,
it's impossible to implement such a model with 80 billion neurons
as in the human brain.
The more we expect of AI, the more we waste of our scarce resources.
Before moving on with AI using this reasonable MCP model,
we have to set what reasonable goal to achieve, and what to do if we do not get it
within some resource constraint and time frame.
Forget AI once and for all and focus on the urgent tasks in health care.
It's unwise to get work done at all cost.
Or if we want to challenge ourselves with AI,
let's challenge ourselves with better health care for all instead.
Sincere Regards
DuyKy Nguyen, PhD EE, ex Symmetricom R&D Engr, unfortunate retiree
with disability!!
Risky posted on 2024/03
A risky ML-AI
Dear my LI fellows,
First and foremost, I offer my sincere apology to ML-AI professionals,
and I also offer my sincere appreciation for corrections to my outdated AI knowledge in this post, based on what I got in 1994 when I first got into AI,
followed by my AI R&D contract at Univ Tek Sydney with the NSW medical center, on diabetic prediction for potential diabetic patients.
I've gotten a bad feeling from all kinds of AI ads whenever I surf the www. So if I did not put out this post, I would feel seriously guilty toward the next generation, per their unaffordable and painful healthcare, as we waste our limited, invaluable resources [time, money] on risky ML-AI.
Basically, AI is a network of neurons [neu-net] with connections whose weights are to be determined in training, to get the desired output. A neuron is activated for output if its input is above some level, using a switching function, per the math model known as the perceptron, proposed by McCulloch in 1943.
In 1960 there was a demonstration of the "perceptron", "the first machine learning", by Frank Rosenblatt, known as the father of deep learning.
Training data is required to eliminate the error between desired and actual output. A positive sum of squared errors, as a multivariate function of the weights, is used in training; ZERO is the minimum of a positive quantity, hence a numerical minimization method is used in this training. Minimization methods are classified as:
- 1st-derivative gradient methods with line search [LS], like steepest descent [SD]
- gradient + LS + 2nd-derivative Hessian methods [including curvature, for better convergence and quicker completion], like Conjugate Gradient
- no gradient at all, like Nelder-Mead
The method runs in an iteration loop, and an exit condition must be provided, like error below 1e-6 [one part per million]. SD is the simplest and the worst, yet the most widely used currently; hence no convergence, no result.
Obviously there's no problem at all with the desired outputs, as they are what we want.
However, the real problem is that the input data must be validated.
Even worse, the input data may not guarantee getting correct weights for the desired output, as the training might be terminated early due to divergence of the computational method [bad data-in, or round-off error in computation?], when a max number of iterations, say 100k, is reached while the error is still above the predefined tolerance,
and the results are rendered useless, per the big difference between desired and actual outputs.
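The training loop described above (sum-of-squared-errors minimization by steepest descent, with both a tolerance exit and a max-iteration guard) can be sketched as follows; the tiny linear model, the data, and the fixed step size are illustrative assumptions of mine.

```python
import numpy as np

# Steepest descent [SD] on a sum of squared errors, with both exit
# conditions from the post: error below a tolerance (1e-6) or a max
# iteration count reached. The small linear model is an assumption.

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # input data
y = np.array([1.0, 2.0, 3.0])                        # desired output

w = np.zeros(2)      # weights to be determined by training
lr = 0.1             # fixed step size (no line search, for simplicity)
tol = 1e-6           # exit: error below one part per million
max_iter = 100_000   # exit: early-termination guard

sse = float("inf")
for i in range(max_iter):
    err = X @ w - y               # actual minus desired output
    sse = float(err @ err)        # positive sum of squared errors
    if sse < tol:                 # converged: error small enough
        break
    w -= lr * 2.0 * (X.T @ err)   # steepest-descent step

# If the loop ends with sse still above tol, the trained weights are
# rendered useless, the failure mode the post warns about.
print(f"iterations: {i}, final SSE: {sse:.2e}, weights: {np.round(w, 3)}")
```

On this well-conditioned toy data SD does converge quickly; the post's point is that on bad or unvalidated data the same loop can hit the iteration cap with the error still above tolerance.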
Getting training data is very costly, and that is how ML came to exist.
But this also brings a new, serious problem: how to validate the input data.
So ML is not promising anything at all!!??
Data is getting more complicated and huge over time, so it's a really big task to get data validated.
So I urge not to invest in ML-AI until data validation is fully addressed.
It appears to me we have some very challenging issues to deal with before AI/ML is acceptable:
1) validation of input data and of data distribution, for convergence of the numerical method
2) efficient minimization methods for any kind of data
May God bless a happy, healthy, wealthy life for all USA people
Amen
Sincere Regards
DuyKy Nguyen, PhD in EE
Proper Way Forward For AI posted on 2024/04
A proper way forward for AI/ML and its usage
Dear my LI fellows,
In my most recent AI post, I mentioned validating all data used, and that data is getting a lot more huge over time. So certainly this task is a lot more challenging.
In my favorite way of divide & conquer, we should look for a better DB [database], as it would make this task more feasible.
In addition, a DB can be used by itself, alone, as AI: a DB can recognize or predict like AI, with an appropriate DB structure to be developed.
I bet some AI is disguised, with a DB underneath,
rather than a true AI with neurons.
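The idea that an appropriately structured data store alone can "recognize or predict" can be sketched as a nearest-record lookup, with no neurons anywhere; the stored records, labels, and query below are purely illustrative assumptions.

```python
# Sketch of "DB as AI": prediction by looking up the stored record
# nearest to the query, with no neurons at all. The records and the
# query are illustrative assumptions.

records = {
    (0.0, 0.0): "off",
    (1.0, 1.0): "on",
    (0.5, 1.5): "standby",
}

def db_predict(query):
    """Return the label of the stored record nearest to the query."""
    def dist2(key):
        return sum((a - b) ** 2 for a, b in zip(key, query))
    return records[min(records, key=dist2)]

print(db_predict((0.9, 1.1)))  # nearest record is (1.0, 1.0) -> "on"
```

A real system would need an index structure to scale, but the point stands: recognition here is pure data lookup.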
I overheard the concern that AI would take over humans and render humans redundant.
I absolutely disagree with that negative view, and I have a positive view instead.
AI is really just a helpful assistant to humans, minimizing confusion for humans in finalizing a decision on action. It should never ever be a decision maker in place of humans.
It's crucial to use anything for its own strength, to maximize its usage for the sake of humans.
I'm a HW guy with no database [DB] background at all, and would sincerely appreciate it if database professionals helped me get started with databases in any language, Java or C++. I failed to do so in a few attempts.
I asked the SW partners who worked with me before, and they told me there is no DB development in the USA; it is all outsourced to India!!?? What short sight and narrow minds!!??
May God bless happy, healthy, wealthy lives for all USA people
Amen
Duy-Ky PhD EE
Validation of AI System posted on 2024/10
Dear my LI fellows,
First and foremost, I have no idea why this post appears with the misleading title "new proposal of simplified structure of AI system"!!??
I have also done such work and do have such a document,
but it is not ready to post, as it may cause some unwanted impact on the AI community. I just want more time to validate my new structure. It would be really embarrassing if the new structure had bad features like the current one??!!
God bless USA, and bless me to find a new structure along with a new training algorithm, done in the blink of an eye with a hidden layer of 4 million nodes [no Activate function, per its ZERO derivative, as analyzed in the document]; the training was done in 0.87 seconds. I did try with 4 billion, but Octave ran out of memory on my W7 64-bit with 32 GB RAM!!??
So I'm more than willing to release such a new structure to a US AI authority for the sake of USA competitiveness in AI, hopefully to make sure the USA is at the forefront of AI technology for the national interest.
As a serious Control professional, where system identification and validation is the very first task required to be done, this working attitude has been in my blood a very long time.
I'm glad to have done some kind of this task for the AI system, based on my limited calculus background.
To the best of my knowledge, it is probably the very first work in this trend. Hopefully it starts a new wave of action in this trend, rather than just taking things for granted and starting seriously on something not yet validated. It's not professional at all to do so. I truly do offer my sincere apology if I may hurt somebody's feelings in any way.
But if I did not, then I would feel guilty toward myself and also the next generation.
So I have 3 articles for this purpose; to get around so many LI post restrictions, I had to put them on my personal page. Hopefully it's understandable.
https://lnkd.in/g9ivrGBv
(Math_of_AI, dkn_optimiz04, Octave Programming, Calculus, My Calculus)
https://lnkd.in/grDCK5Sy
https://lnkd.in/gEYz2ygn
https://lnkd.in/gPegnCnb
Octave Programming should be read first, as a tool to have some kind of simulation, for fun.
I also have my note on calculus; it is very old stuff, but nobody cares to simplify it.
But I found some college students painfully struggling with it, so I wrote this note:
https://lnkd.in/gkHg5Xau
It is absolutely nothing new, but comprehensive, painless, and compact, presented in a unified way from limits to derivatives up to the Laplace transform to solve differential equations; but no integrals to compute volumes and the such, as these can easily be done using numerical methods in Octave.
That's me: now 70, retired after 2 strokes and not fully recovered, but still studying new stuff.
I would sincerely appreciate your help in showing me your successful AI training using an Activate function.
I've been quite aware that it is supposed to work with such a function,
but all of a sudden I failed using it in my simulation of training, and found the root cause to be its ZERO derivative.
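The root cause described above can be demonstrated numerically: a step-type Activate function has ZERO derivative away from its jump, so a gradient-based update moves nothing, while a smooth function like the sigmoid does not have this problem. The numeric check below is my own illustrative sketch (in Python rather than the author's Octave).

```python
import math

# A step-type "Activate" function has ZERO derivative everywhere except
# at its jump, so gradient-based training cannot make progress.

def step(z):
    return 1.0 if z >= 0.0 else 0.0

def numeric_derivative(f, z, h=1e-6):
    return (f(z + h) - f(z - h)) / (2 * h)

print(numeric_derivative(step, 0.7))    # 0.0 away from the jump
print(numeric_derivative(step, -1.3))   # 0.0 again

# A gradient-descent update w -= lr * grad therefore moves nothing.
w, lr = 0.5, 0.1
w -= lr * numeric_derivative(step, 0.7)
print(w)  # still 0.5: the weights are stuck

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A smooth activation has a nonzero derivative, so training can move.
print(numeric_derivative(sigmoid, 0.7) > 0.0)  # True
```

This is why gradient-trained networks use smooth (or at least piecewise-differentiable) activations rather than a hard threshold.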
DuyKy Nguyen, PhD EE @unitedthc.com
Some Critical Questions To All Serious AI Professionals
AI questions to serious AI professionals, posted on 2024/11/24
Dear my serious, great AI professionals,
May I have your honest, straight answers to my questions below, based on my 20 yrs in NPI [New Product Initiative], on the original contributions in my PhD works, and on my ability to turn a business around in any workplace I have been in, in VN, Australia, and the USA.
The solution to a problem of any type could itself be of any type.
For example, there are HW/SW components in modern equipment, so a product problem could be of HW or SW type, but the solution could be HW or SW regardless of the problem type.
Unfortunately, an AI system is trained on some type of input data, whether ML or not, so it cannot provide an efficient solution as expected.
Before really making an AI commitment, we should have a screening evaluation in the real world first, and try our best with existing technology.
1. Is there any new breakthrough in AI recently? If not, we have to follow the divide & conquer way and look at the AI subsystems, like Data/Neuro Science and computational methods. Otherwise AI may get stuck in a dead end.
2. Are you aware of any successful AI implementation in the real world?
3. What is its structure, like how many neurons, inputs, outputs?
4. How was it implemented: tools, library, platform?
5. Have you attempted to duplicate it?
6. If you are an ML [Machine Learning] AI professional, how does your ML work? Does it collect training data by itself, and what are the collecting devices? Are they available currently? How is input data qualified/verified, or is it simply used without any such mechanism at all?
I did have one, as listed in my Experience, in 1995, for prediction of diabetic patients, but unfortunately the predictions were wrong. I really did it with MatLab on Windows 3; unfortunately I could not run it successfully, for some unknown reason, after 29 years and through some OS upgrades: Windows 3, Win 2000, Win 7.
If you cannot verify/validate the AI model,
at least you should do so on its implementation.
What I'm really concerned about is that an implementation might not be a real AI system but something else, like just a database search, for their own interest, not for the public AI interest.
For the sake of saving our limited resources, time, money, and human effort,
I'm more than willing to offer my free AI consulting via my email, dkn@unitedthc.com, in the hope that our limited resources are reserved for real business, not for the unreal and risky business of AI.
Honestly speaking, all automatic systems, like autopilot etc., can be seen as AI-powered,
but they have no neurons at all!!??
If you as an AI professional have no answer to those questions, you may want to save your resources for something real, for the sake of The People.
We currently do have a lot of serious, urgent, and challenging tasks, like poor healthcare and substandard living conditions: health care of low quality but expensive.
Truly and Sincerely Appreciated
May GOD bless us with bright ideas for the sake of a happier and wealthier USA
Reality of AI system posted on 2024/12/26
A qualified system must satisfy the criteria below:
1. a theoretical/mathematical model
2. the model must be verified
3. it must be realizable and stable
As a specific illustration, take Control technology, well used in all areas for decades, with the features below:
1. Mathematically derived from differential physical laws, with a transfer function composed of a Numerator and a Denominator
2. Backed by system identification & verification, as seen in this article ==> ctl_mdl
3. Realizability, by the controllability theorem
4. Stability, by the stability theorem, simply ensuring the Denominator has no zeros in the right half-plane (no unstable poles)
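Criteria 3 and 4 can be sketched as a quick numerical check: compute the roots (poles) of the Denominator polynomial and verify that they all lie in the left half-plane. The example transfer function below is an illustrative assumption, not from the post.

```python
import numpy as np

# Stability check for a transfer function Numerator/Denominator: the
# system is stable when all roots (poles) of the Denominator lie in
# the left half-plane. The plant 1/(s^2 + 3s + 2) is an assumption.

num = [1.0]            # Numerator
den = [1.0, 3.0, 2.0]  # Denominator: s^2 + 3s + 2 = (s + 1)(s + 2)

poles = np.roots(den)
stable = bool(np.all(poles.real < 0))  # no right-half-plane poles
print("poles:", np.sort(poles.real))   # roots at -2 and -1
print("stable:", stable)
```

With a factor like (s - 1) in the Denominator the same check would report an unstable pole, which is exactly what criterion 4 rules out.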
We are all quite familiar with wireless phones.
To the best of my knowledge, with some experience at Venture Design Service in Mar 2013 - Mar 2015:
for wireless operation,
the signal from a transmitting [Tx] phone goes to a nearby wireless tower,
and the receiving [Rx] phone gets the signal from a nearby tower.
There is a Tx/Rx in each tower, and it has a Controlled Oscillator; without it, a wireless tower cannot be operational.
Now going back to AI:
its model is based on the perceptron model by McCulloch in 1943, using an Activate function.
However, it has never been verified, either against the human brain or via mathematical simulation.
The current AI system appears to have no stability criterion.
All it has is a training process for the perceptron, with an Activate function of ZERO gradient.
However, the training process is based on a gradient-type optimization method.
Therefore the training ends up in failure.
My conclusion is:
the current AI system appears non-operational.
Therefore,
I came up with a new AI system, with a powerful training method done in the blink of an eye.
I wish to have a chance to validate my new AI structure.
Any idea is truly appreciated, to my email dkn@unitedthc.com.
May The Lord bless you with a great, happy, healthy, wealthy life.
Merry Christmas and Happy New Year