More of my philosophy about Machine programming and about oneAPI from Intel company..

Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: m1...@m1.com (World-News2100)
Newsgroups: soc.culture.china
Subject: More of my philosophy about Machine programming and about oneAPI from
Intel company..
Date: Mon, 1 Nov 2021 19:17:59 -0400
Organization: A noiseless patient Spider
Lines: 202
Message-ID: <slpsjb$ffj$1@dont-email.me>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 1 Nov 2021 23:18:03 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="768e013058c128587b88f8c6cb2a14f4";
logging-data="15859"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/UYzNNerLFtiWa/uEjgn/i"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
Thunderbird/91.2.1
Cancel-Lock: sha1:MPS4zEo4msIJwq2ndY/oWv/kQrU=
Content-Language: en-US

Hello,

More of my philosophy about Machine programming and about oneAPI from
Intel company..

I am a white Arab from Morocco, and I think I am smart, since I have also
invented many scalable algorithms and other algorithms..

I will say that when you know C and C++ moderately well, it is not so
difficult to program OpenCL (read about OpenCL here:
https://en.wikipedia.org/wiki/OpenCL) or CUDA. But the important
question is: what is the difference between an FPGA and a GPU? So I
invite you to read the following interesting paper comparing GPU and
FPGA performance:

https://www.bertendsp.com/pdf/whitepaper/BWP001_GPU_vs_FPGA_Performance_Comparison_v1.0.pdf

So I think, from the paper above, that the GPU is the good way to go
when you want both performance and cost efficiency.

So I think that oneAPI from Intel, which aims to do all the heavy
lifting for you so that you can focus on the algorithm rather than on
writing OpenCL calls, is not such a smart way of doing things, since,
as I said above, OpenCL and CUDA programming is not so difficult. And
as you will notice below, oneAPI from Intel also lets you program FPGAs
in a higher-level manner, but here again, from the paper above, we can
notice that the GPU is the good way to go when you want performance and
cost efficiency. So, to approximate well the efficiency and usefulness
of oneAPI from Intel, you can still use efficient and useful libraries.
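
To make concrete what "writing OpenCL calls" means here, below is a
minimal sketch (my own illustration, not something from the paper or
from Intel) of the host-side boilerplate that a simple OpenCL program
needs just to scale a vector on a GPU; error checking is omitted and
the calls are the standard OpenCL C API:

#include <CL/cl.h>
#include <cstdio>
#include <vector>

// The OpenCL C kernel is kept as a string and compiled at runtime.
static const char* kSource =
    "__kernel void scale(__global float* data, float factor) {"
    "    size_t i = get_global_id(0);"
    "    data[i] *= factor;"
    "}";

int main() {
    std::vector<float> data(1024, 1.0f);

    // Pick the first platform and the first GPU device on it.
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    // Context, command queue, program and kernel are all created explicitly.
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);
    cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(program, "scale", nullptr);

    // A device buffer initialized from host memory.
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                data.size() * sizeof(float), data.data(), nullptr);

    // Bind the arguments, launch over the whole vector, read the result back.
    float factor = 2.0f;
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(float), &factor);
    size_t global = data.size();
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, data.size() * sizeof(float),
                        data.data(), 0, nullptr, nullptr);

    printf("data[0] = %f\n", data[0]);  // expect 2.0

    clReleaseMemObject(buf);
    clReleaseKernel(kernel);
    clReleaseProgram(program);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}

So the point is that none of this is conceptually hard for a C or C++
programmer; it is verbose rather than difficult.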

Here is the new oneAPI from Intel; read about it here:

https://codematters.online/intel-oneapi-faq-part-1-what-is-oneapi/
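
And to show what I mean by the higher-level manner of oneAPI, here is a
rough sketch (my own, assuming a SYCL 2020 toolchain such as Intel's
DPC++ compiler, and not an official Intel example) of the same vector
scaling written with SYCL, the C++ abstraction that oneAPI builds on;
device selection, buffer management and the kernel launch are folded
into a few library constructs:

#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    std::vector<float> data(1024, 1.0f);

    // The default selector picks whatever device the runtime finds:
    // a GPU, an FPGA (or its emulator), or the host CPU.
    sycl::queue q;

    {
        sycl::buffer<float, 1> buf(data.data(), sycl::range<1>(data.size()));
        q.submit([&](sycl::handler& h) {
            sycl::accessor acc(buf, h, sycl::read_write);
            h.parallel_for(sycl::range<1>(data.size()),
                           [=](sycl::id<1> i) { acc[i] *= 2.0f; });
        });
    }  // the buffer's destructor waits and copies the result back into data

    std::cout << "data[0] = " << data[0] << "\n";  // expect 2
    return 0;
}

So the convenience is real, but given the paper above, and given that
the OpenCL version is verbose rather than difficult, I think this
convenience by itself is not a decisive argument, and efficient and
useful libraries give you much of the same benefit.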

And now I will talk about another interesting subject: the next
revolution in the software industry, which is Machine programming. So I
invite you to read carefully the following new article about it:

https://venturebeat.com/2021/06/18/ai-weekly-the-promise-and-limitations-of-machine-programming-tools/

So I think that Machine programming will be limited to AI-powered
assistants that are not so efficient, since I think that connectionism
in artificial intelligence is not able to make common sense reasoning
emerge. So I invite you to read my following thoughts about it in order
to understand why:

More of my philosophy about the limits of connectionist models in
artificial intelligence and more..

I think I am smart, and I will say that a connectionist model like deep
learning does not have the same nature as the human brain, since I can
say that the brain is not just connections of neurons as in deep
learning; it is also a "sense", like the sense of touch, and I think
that this sense of the brain is biological. And I think that this aspect
of the brain's nature, of also being a sense, is what gives rise to the
emergence of consciousness and self-awareness and of a higher level of
common sense reasoning. This is why I think that the connectionist model
in artificial intelligence is showing its limits by not being able to
make common sense reasoning emerge, but, as I say below, the hybrid
connectionist + symbolic model can make common sense reasoning emerge.

And here is what I said about human self-awareness and awareness:

So I will start by asking a philosophical question:

Are human self-awareness and awareness an emergence, and what are they?

So I will explain my findings:

I think I have found the first smart pattern with my fluid intelligence,
and I have also found the rest, and it is the following:

Notice that when you touch cold water, you come to know the essence or
nature of the cold water, and you also know that this is related to
human senses. So I think that the senses of a human give life to ideas;
it is like a "reification" of an idea. I mean that an idea is alive
since it is, so to speak, reified through the human senses that sense
time and space and matter, so this reification gives the correct
meaning, since you are, so to speak, reifying through the human senses,
which give the meaning. And I say that this capacity for this kind of
reification through the human senses is an emergence that comes from
human biology. So I am smart, and I will say that the brain is a kind
of calculator that calculates by using composability with the meanings
that also come from this kind of reification through the human senses,
and I think that self-awareness comes from the human senses that sense
our ideas and our thinking, and that is what gives consciousness and
self-awareness. So now you understand that what is missing in
artificial intelligence is this kind of reification through the human
senses, which renders the brain much more optimal than artificial
intelligence, and I will explain more of the why of it in my next posts.

More of my philosophy about the future of artificial intelligence and more..

I will ask a philosophical question:

Can we forecast the future of artificial intelligence?

I think I am smart, and I am quickly noticing that connectionism in
artificial intelligence, as with deep learning, is not working, because
it is not able to make common sense reasoning emerge. So I invite you to
read the following article from ScienceDaily in order to notice it,
since it speaks about the connectionist models (like those of deep
learning, or the transformers, which are a kind of deep learning) in
artificial intelligence:

https://www.sciencedaily.com/releases/2020/11/201118141702.htm

Other than that, the following new artificial intelligence connectionist
models, like those from Microsoft and NVIDIA that are better than GPT-3,
have the same weakness, since I think that they cannot make common sense
reasoning emerge. Here they are:

"Microsoft and Nvidia today announced that they trained what they claim
is the largest and most capable AI-powered language model to date:
Megatron-Turing Natural Language Generation (MT-NLP). The successor to
the companies’ Turing NLG 17B and Megatron-LM models, MT-NLP contains
530 billion parameters and achieves “unmatched” accuracy in a broad set
of natural language tasks, Microsoft and Nvidia say — including reading
comprehension, commonsense reasoning, and natural language inferences."

Read more here:

https://venturebeat.com/2021/10/11/microsoft-and-nvidia-team-up-to-train-one-of-the-worlds-largest-language-models/

Because I also said the following:

I think I am quickly understanding the defect of Megatron-Turing
Natural Language Generation (MT-NLP), which is better than GPT-3, and
it is this: the "self-attention" of transformers in NLP, even if it
scales to very long sequences, has limited expressiveness. Since
transformers cannot process input sequentially, they cannot model
hierarchical structures and recursion, and hierarchical structure is
widely thought to be essential to modeling natural language, in
particular its syntax. So I think that Microsoft's Megatron-Turing
Natural Language Generation (MT-NLP), and GPT-3 too, will in practice
be applied to limited areas, but they cannot make common sense
reasoning, or the like, emerge, and that is necessary for general
artificial intelligence.

Read the following paper in order to understand the mathematical proof of it:

https://aclanthology.org/2020.tacl-1.11.pdf
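
To give a concrete feel for what "hierarchical structures and
recursion" means here, the small program below (my own illustration,
not taken from the paper) recognizes arbitrarily deep nested brackets,
a Dyck-style language; doing this exactly needs an unbounded stack-like
memory, and it is this kind of computation that the paper above argues
fixed-depth self-attention cannot carry out reliably as the input grows:

#include <iostream>
#include <stack>
#include <string>

// Returns true if every '(' and '[' is closed by the matching bracket
// in the right order, at any nesting depth.
bool balanced(const std::string& s) {
    std::stack<char> open;
    for (char c : s) {
        if (c == '(' || c == '[') {
            open.push(c);
        } else if (c == ')' || c == ']') {
            if (open.empty()) return false;
            char expected = (c == ')') ? '(' : '[';
            if (open.top() != expected) return false;
            open.pop();
        }
    }
    return open.empty();
}

int main() {
    std::cout << balanced("([()[]])") << "\n";  // 1: properly nested
    std::cout << balanced("([)]") << "\n";      // 0: crossing brackets
    return 0;
}

A transformer can learn to approximate this on the bounded depths it
sees during training, but the stack here handles any depth by
construction, and that is the contrast I am pointing at.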

So I think that the model that will have much more success at making
common sense reasoning emerge, or that can do so, is something like the
following hybrid connectionist + symbolic model in artificial
intelligence, called COMET. Read about it here:

Common Sense Comes Closer to Computers

https://www.quantamagazine.org/common-sense-comes-to-computers-20200430/

And here is what I also said about COMET:

I have just read the following article about neuroevolution, which is a
meta-algorithm in artificial intelligence, an algorithm for designing
algorithms. I invite you to read about it here:

https://www.quantamagazine.org/computers-evolve-a-new-path-toward-human-intelligence-20191106/

So notice that it says the following:

"In neuroevolution, you start by assigning random values to the weights
between layers. This randomness means the network won’t be very good at
its job. But from this sorry state, you then create a set of random
mutations — offspring neural networks with slightly different weights —
and evaluate their abilities. You keep the best ones, produce more
offspring, and repeat."

So I think that the problem with the neuroevolution above is that the
step of "evaluating the abilities of the offspring neural networks"
lacks common sense.
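
Here is a toy sketch (my own, with a deliberately simplistic fitness
function; it is not the actual method from the article) of the loop
that the quoted paragraph describes: start from random weights, create
mutated offspring, evaluate them, keep the best. The thing to notice is
that the "evaluate their abilities" step is just whatever scoring
function we hand the loop, and that is exactly where I say common sense
is missing:

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

using Weights = std::vector<double>;

// Placeholder fitness: how close the weights are to a fixed target value.
// A real neuroevolution system would run the network on a task instead.
double fitness(const Weights& w) {
    double score = 0.0;
    for (double x : w) {
        double diff = x - 0.5;   // pretend 0.5 is "good" everywhere
        score -= diff * diff;
    }
    return score;
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> init(-1.0, 1.0);
    std::normal_distribution<double> noise(0.0, 0.1);

    // Start from random weights.
    Weights best(16);
    for (double& w : best) w = init(rng);

    for (int generation = 0; generation < 100; ++generation) {
        // Create mutated offspring: copies of the best with small random changes.
        std::vector<Weights> offspring(32, best);
        for (Weights& child : offspring)
            for (double& w : child) w += noise(rng);

        // Evaluate their abilities and keep the best one.
        const Weights& best_child = *std::max_element(
            offspring.begin(), offspring.end(),
            [](const Weights& a, const Weights& b) { return fitness(a) < fitness(b); });
        if (fitness(best_child) > fitness(best)) best = best_child;
    }

    std::cout << "best fitness after evolution: " << fitness(best) << "\n";
    return 0;
}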

So read the following interesting paper, which says that artificial
intelligence has also brought a kind of common sense to computers; read
about it here:

https://arxiv.org/abs/1906.05317

And read about it in the following article:

"Now, Choi and her collaborators have united these approaches. COMET
(short for “commonsense transformers”) extends GOFAI-style symbolic
reasoning with the latest advances in neural language modeling — a kind
of deep learning that aims to imbue computers with a statistical
“understanding” of written language. COMET works by reimagining
common-sense reasoning as a process of generating plausible (if
imperfect) responses to novel input, rather than making airtight
deductions by consulting a vast encyclopedia-like database."

Read more here:


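Finally, to make the shape of that idea concrete, here is a
hypothetical sketch (mine; the struct and the generate_tail function
are invented for illustration and are not COMET's real interface) of
the kind of input and output the article describes: a head event plus a
relation from a commonsense knowledge graph such as ATOMIC, and a
generated tail that is plausible but not guaranteed to be correct:

#include <iostream>
#include <string>

struct CommonsenseTriple {
    std::string head;      // e.g. "X buys an umbrella"
    std::string relation;  // e.g. "xIntent" (why X did it)
    std::string tail;      // generated: plausible, not looked up in a database
};

// Stand-in for a call into a trained language model; a real system would
// run a transformer decoder conditioned on (head, relation).
std::string generate_tail(const std::string& head, const std::string& relation) {
    if (relation == "xIntent" && head.find("umbrella") != std::string::npos)
        return "to stay dry in the rain";
    return "<model output>";
}

int main() {
    CommonsenseTriple t{"X buys an umbrella", "xIntent", ""};
    t.tail = generate_tail(t.head, t.relation);
    std::cout << t.head << " --" << t.relation << "--> " << t.tail << "\n";
    return 0;
}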
