From: m1...@m1.com (World-News2100)
Newsgroups: soc.culture.china
Subject: More of my philosophy about Wait-free Eras and Hazard Eras and more..
Date: Sat, 30 Oct 2021 19:39:19 -0400
Message-ID: <slkl38$vpb$1@dont-email.me>

Hello,

More of my philosophy about Wait-free Eras and Hazard Eras and more..

I am a white Arab from Morocco, and I think I am smart, since I have
also invented many scalable algorithms.

As you have just noticed, I have just quickly read a PhD paper below,
and I have also just read two other PhD papers, about Wait-free Eras
and Hazard Eras; here they are, read them carefully:

Here is the PhD paper about Wait-free Eras:

https://arxiv.org/pdf/2001.01999.pdf

And here is the PhD paper about Hazard Eras:

https://github.com/pramalhe/ConcurrencyFreaks/blob/master/papers/hazarderas-2017.pdf

They are two new memory-reclamation schemes, but I have just read the
PhD papers and I am finding two defects or disadvantages. First, you
have to "fix" the number of threads in advance for the algorithms to
work, so they are not flexible. Second, they are not energy efficient:
the lock-free Hazard Eras loops, as lock-free algorithms do, so it
consumes a lot of energy, and I think Wait-free Eras also loops, a
constant number of times bounded by the number of threads (which is
what makes it wait-free), as in the get_protected() function in the
Wait-free Eras paper above, so I think it too consumes too much
energy. (A minimal sketch of such a protection loop follows the video
link below.) And about epoch-based memory reclamation, it is not
robust, and you can understand why by looking at the following video:

VBR: Version Based Reclamation

https://www.youtube.com/watch?v=uWXXNWNwr-w
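
To make the energy argument above concrete, here is a minimal sketch
of the kind of era-publishing retry loop that Hazard Eras and
Wait-free Eras rely on. This is my own illustration, not code from
the papers; names such as g_era, reservations and protect_era are
assumptions:

#include <atomic>
#include <cstdint>

// The reservations array must be sized for a fixed, pre-declared
// number of threads: the first defect mentioned above.
constexpr int MAX_THREADS = 128;

std::atomic<uint64_t> g_era{1};                   // global era clock
std::atomic<uint64_t> reservations[MAX_THREADS];  // one published era per thread

// Publish the current era so that reclaimers will not free nodes
// belonging to it. The loop re-reads g_era until the published
// reservation is still current; in Hazard Eras this can retry
// indefinitely (lock-free), while Wait-free Eras bounds the retries
// by the number of threads, which is what makes it wait-free. Either
// way, the retrying is where the extra work, and hence the energy,
// goes.
uint64_t protect_era(int tid) {
    uint64_t e = g_era.load(std::memory_order_acquire);
    for (;;) {
        reservations[tid].store(e, std::memory_order_seq_cst);
        uint64_t cur = g_era.load(std::memory_order_acquire);
        if (cur == e)
            return e;   // reservation is visible and still current
        e = cur;        // the era advanced; retry with the new value
    }
}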

And notice the following new algorithm from PhD researchers, called
OneFile, which is a wait-free persistent transactional memory:

https://github.com/pramalhe/OneFile/blob/master/OneFile-2019.pdf

So the disadvantages of OneFile are that it is not energy efficient,
like the Wait-free Eras algorithm above, and that its code is much
more difficult and complex than using the much simpler locking
algorithms.
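
For contrast, here is what a multi-word atomic update looks like with
a simple lock, to illustrate the simplicity that the paragraph above
refers to. This is a minimal sketch of my own; the account-transfer
example is hypothetical:

#include <mutex>

struct Account {
    long balance = 0;
};

std::mutex g_lock;  // one lock protecting both accounts

// With locking, an atomic update of several words is just a critical
// section. A wait-free persistent STM like OneFile needs far more
// mechanism (logs, helping, fences) to get the same atomicity
// without blocking.
void transfer(Account& from, Account& to, long amount) {
    std::lock_guard<std::mutex> guard(g_lock);
    from.balance -= amount;
    to.balance   += amount;
}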

More of my philosophy about the limitations of transformers and
Natural Language Processing (NLP) in artificial intelligence..

I invite you to read the following about Microsoft Megatron-Turing
Natural Language Generation (MT-NLG) from NVIDIA:

https://developer.nvidia.com/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/

I think I am quickly understanding the defect of Megatron-Turing
Natural Language Generation (MT-NLG), which is better than GPT-3. It
is that the "self-attention" of transformers in NLP, even if it
scales to very long sequences, has limited expressiveness: since
transformers cannot process input sequentially, they cannot model
hierarchical structures and recursion, and hierarchical structure is
widely thought to be essential to modeling natural language, in
particular its syntax. So I think that Microsoft Megatron-Turing
Natural Language Generation (MT-NLG), and GPT-3 too, will be
practically applied to limited areas, but they cannot make
common-sense reasoning or the like emerge, which is necessary for
general artificial intelligence.

Read the following paper to understand the mathematical proof of it:

https://aclanthology.org/2020.tacl-1.11.pdf
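
As a reminder of what "self-attention" means here, this is the
standard transformer definition (a general fact about the
architecture, not something taken from the paper above):

$$ \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left( \frac{Q K^{\top}}{\sqrt{d_k}} \right) V $$

A transformer applies a fixed number of such layers regardless of the
input length, which is the intuition behind the paper's proof: as I
understand it, the paper shows that hard-attention transformers
cannot recognize languages such as PARITY or 2DYCK (balanced
brackets), which require exactly the kind of recursion and
hierarchical structure discussed above.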

Read my previous thoughts:

More of my philosophy about Natural Language Processing (NLP) in
artificial intelligence and more..

I think that the transformers in Natural Language Processing (NLP)
use a kind of deep learning, and Natural Language Processing (NLP) is
a branch of Artificial Intelligence (AI) that enables machines to
understand human language. I think that the transformers in NLP are
using pruning + quantization, which makes the model much faster and
much smaller so that it scales much better, and I think these are the
basic ideas of Microsoft Megatron-Turing Natural Language Generation
(MT-NLG) below. So I think that this is the way that can make
common-sense reasoning, and also reading comprehension and natural
language inference, "emerge" in NLP by way of 'brute force', when the
model attains 1 trillion or more parameters. So read my thoughts
below about artificial intelligence to understand more, and you can
understand more about pruning + quantization by looking at the
following video of a Jewish PhD researcher called Nir Shavit, who has
invented a software called Neural Magic that does pruning +
quantization efficiently (a small sketch of pruning + quantization
follows the video link below):

The Software GPU: Making Inference Scale in the Real World by Nir
Shavit, PhD

https://www.youtube.com/watch?v=mGj2CJHXXKQ
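
To make the pruning + quantization idea concrete, here is a minimal
sketch of magnitude pruning followed by linear 8-bit quantization of
a weight array. This is an illustration of the general technique, not
Neural Magic's actual implementation:

#include <cmath>
#include <cstdint>
#include <vector>

// Magnitude pruning: zero out weights whose absolute value is below
// a threshold, so the model becomes sparse and needs less compute.
void prune(std::vector<float>& weights, float threshold) {
    for (float& w : weights)
        if (std::fabs(w) < threshold)
            w = 0.0f;
}

// Linear quantization: map each float weight to an 8-bit integer,
// shrinking the model roughly 4x compared to 32-bit floats.
std::vector<int8_t> quantize(const std::vector<float>& weights, float scale) {
    std::vector<int8_t> q;
    q.reserve(weights.size());
    for (float w : weights) {
        float v = std::round(w / scale);
        if (v > 127.0f)  v = 127.0f;    // clamp to the int8 range
        if (v < -128.0f) v = -128.0f;
        q.push_back(static_cast<int8_t>(v));
    }
    return q;
}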

More of my philosophy about the benefits of Exascale supercomputers and
more..

As you have just noticed, I have just posted about the following:

Intel's Aurora Supercomputer Now Expected to Exceed 2 ExaFLOPS Performance

Read more here:

https://www.anandtech.com/show/17037/aurora-supercomputer-now-expected-to-exceed-2-exaflops-performance

But exascale supercomputers will also allow us to construct an
accurate map of the brain, which would let us "reverse engineer" or
understand the brain; read the following to see it:

“If we don’t improve today’s technology, the compute time for a whole
mouse brain would be something like 1,000,000 days of work on current
supercomputers. Using all of Aurora, if everything worked beautifully,
it could still take 1,000 days.” Nicola Ferrier, Argonne senior computer
scientist

Read more here to understand:

https://www.anl.gov/article/preparing-for-exascale-argonnes-aurora-supercomputer-to-drive-brain-map-construction
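
As a quick back-of-envelope reading of the quote above (my
arithmetic, not Argonne's):

$$ \frac{1{,}000{,}000 \text{ days on current supercomputers}}{1{,}000 \text{ days on all of Aurora}} = 1000\times \text{ speedup} $$

And even the optimistic figure of 1,000 days is almost three years of
machine time, which is why the quote says today's technology itself
still has to improve.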

Also Exascale supercomputers will allow researchers to tackle problems
which were impossible to simulate using the previous generation of
machines, due to the massive amounts of data and calculations involved.

Small modular nuclear reactor (SMR) design, wind farm optimization and
cancer drug discovery are just a few of the applications that are
priorities of the U.S. Department of Energy (DOE) Exascale Computing
Project. The outcomes of this project will have a broad impact and
promise to fundamentally change society, both in the U.S. and abroad.

Read more here:

https://www.cbc.ca/news/opinion/opinion-exascale-computing-1.5382505

Also, the goal of delivering safe, abundant, cheap energy from fusion
is just one of many challenges in which exascale computing's power
may prove decisive. That's the hope and expectation. To know more
about the other benefits of exascale computing power, read more here:

https://www.hpcwire.com/2019/05/07/ten-great-reasons-among-many-more-to-build-the-1-5-exaflops-frontier/

And more of my philosophy about the future of humanity:

Read more here:

https://groups.google.com/g/alt.culture.morocco/c/0X024jfzNvM

More of my philosophy about artificial intelligence..
AI Generates Hypotheses Human Scientists Have Not Thought Of

Read more here:

https://www.scientificamerican.com/article/ai-generates-hypotheses-human-scientists-have-not-thought-of/

More of my philosophy about artificial intelligence and common sense
reasoning..

"Microsoft and Nvidia today announced that they trained what they claim
is the largest and most capable AI-powered language model to date:
Megatron-Turing Natural Language Generation (MT-NLP). The successor to
the companies’ Turing NLG 17B and Megatron-LM models, MT-NLP contains
530 billion parameters and achieves “unmatched” accuracy in a broad set
of natural language tasks, Microsoft and Nvidia say — including reading
comprehension, commonsense reasoning, and natural language inferences."

Read more here:

https://venturebeat.com/2021/10/11/microsoft-and-nvidia-team-up-to-train-one-of-the-worlds-largest-language-models/
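
To get a sense of the scale of the 530-billion-parameter figure in
the quote, here is a rough calculation of my own, assuming 16-bit
floating-point weights:

$$ 530 \times 10^{9} \text{ parameters} \times 2 \text{ bytes/parameter} \approx 1.06 \text{ TB of weights alone} $$

That is far beyond the memory of any single GPU, which is why
training such a model required the DeepSpeed and Megatron
infrastructure mentioned in the NVIDIA link above.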

So I think that one hypothesis is that we should be able to build even
bigger models, with trillions of parameters or more, and artificial
common sense will eventually emerge. Let’s call this the ‘brute-force’
hypothesis.

Read more here to see it:

https://towardsdatascience.com/the-quest-for-artificial-common-sense-766af7fce292
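
One common way to state the 'brute-force' hypothesis more precisely
is through empirical neural scaling laws (my addition, not from the
article above), in which the loss $L$ falls as a power law in the
parameter count $N$:

$$ L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N} $$

The hypothesis then amounts to betting that pushing $N$ into the
trillions keeps riding this curve until abilities like common-sense
reasoning emerge.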

Also, I invite you to look carefully at the following video about
artificial intelligence by a Jewish AI (artificial intelligence)
scientist (and read about him here:
https://rogantribe.com/who-is-lex-fridman/):

Exponential Progress of AI: Moore's Law, Bitter Lesson, and the Future
of Computation

https://www.youtube.com/watch?v=Me96OWd44q0

I think that the Jewish AI (artificial intelligence) scientist who is
speaking in the video above, and who is called Lex Fridman, is making
a big mistake, since he focuses too much on improving deep learning
in artificial intelligence through the exponential improvement of CPU
hardware computation, but I think that it is a "big" mistake, and you
can easily notice it by carefully reading my following thoughts and
writing:

More of my philosophy about artificial intelligence and specialized
hardware and more..

I think that specialized hardware for deep learning in artificial
intelligence, like GPUs and quantum computers, is no longer needed,
since you can use only a much less powerful CPU with more memory and
do it efficiently: a PhD researcher called Nir Shavit, who is a Jew
from Israel, has invented a very interesting software called Neural
Magic that does this efficiently. I invite you to look at the
following very interesting video of Nir Shavit to know more about it:


The Software GPU: Making Inference Scale in the Real World by Nir
Shavit, PhD

https://www.youtube.com/watch?v=mGj2CJHXXKQ
