More of my philosophy about understanding the consequences and about my fluid intelligence and about causality and about the Transformers and about another important limitation of Large Language Models and about planning and reasoning of Large Language Models

Newsgroups: soc.culture.china
From: amine...@gmail.com (Amine Moulay Ramdane)
Date: Mon, 19 Jun 2023 13:20:58 +0000
Message-ID: <ddb0818b-2ea4-4c14-86d0-ae4f7c7033dbn@googlegroups.com>

Hello,

More of my philosophy about understanding the consequences and about my fluid intelligence and about causality and about the Transformers and about another important limitation of Large Language Models and about planning and reasoning of Large Language Models and about the window context and about GPT-4 memory and about Canada and about agile methodology and about the maximum token limit and about the other limitations of Large Language Models such as GPT-4 and about future of artificial intelligence and about the multimodality of Large Language Models such as GPT-4 and about my philosophy and about my contributions here...

I am a white Arab from Morocco, and I think I am smart, since I have also invented many scalable algorithms and other algorithms.

I invite you to read the following interesting article:

Why artificial intelligence needs to understand consequences

Read more here:

https://www.nature.com/articles/d41586-023-00577-1

And here is my other question to ChatGPT as proof:

"So let's take for example open source software projects on gitub, so open source projects on gitub are not documented correctly, so ChatGPT can lack to capture causality, and can lack to understand correctly creativity of it ?"

And here is what ChatGPT has answered:

"Yes, that's correct. Open-source projects on GitHub may not always have comprehensive or well-documented explanations of the underlying causal relationships or the creative processes involved. ChatGPT, being a language model trained on a vast range of internet text, including GitHub repositories, might struggle to accurately capture causality and understand the creativity behind these projects for several reasons:

- Incomplete or inconsistent documentation: Open-source projects often rely on voluntary contributions, and documentation quality can vary significantly. Some projects may have well-documented code and explanations, while others may have limited or outdated documentation. In such cases, ChatGPT may not have access to the necessary information to understand the causal relationships and creative aspects of the project.

- Implicit knowledge and context: Developers working on open-source projects often possess implicit knowledge and context about the project, its goals, and the rationale behind specific design decisions. This tacit knowledge may not be explicitly documented and can be challenging for ChatGPT to infer accurately. Without explicit information, the model may struggle to capture the underlying causality or appreciate the nuances of creativity in these projects.

- Limited exposure to specific projects: While language models like ChatGPT have been trained on diverse sources, including GitHub repositories, they cannot access the entirety of the internet or have real-time experience with specific projects. As a result, the model's understanding of any given project is limited to the information it has learned during training. If a project has limited presence in the training data or lacks detailed documentation, the model may struggle to capture the causality or creativity involved.

It's important to recognize that while ChatGPT can provide general information and insights, its understanding of specific open-source projects may be limited, and it may not accurately capture the intricacies of causality or creativity involved. In-depth knowledge from domain experts and consulting project documentation or discussions with the project's contributors would be more reliable sources of information in these cases."
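To illustrate this point, here is a small hypothetical example (the function and the rationale in it are invented for illustration, they are not taken from any real GitHub project) of the difference that documentation makes. From the undocumented version, a language model only sees the "what" of the code and never the "why":

    def scale(xs):
        m = max(abs(x) for x in xs)
        return [x / m for x in xs] if m else xs

But the documented version states the causal rationale in plain English, and that is exactly the kind of text that a Large Language Model can learn causality from:

    def scale(xs):
        """Normalize by the largest magnitude BECAUSE the downstream
        solver diverges when inputs leave [-1, 1]; and if all values
        are zero, return the input unchanged to avoid division by zero."""
        m = max(abs(x) for x in xs)
        return [x / m for x in xs] if m else xs

So the second version specifies the cause-effect relationship with the language, and it is the kind of documentation that the above answer says is often missing from open-source projects.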

And here is my other question to ChatGPT:

"So even if it is not documented well, humans can decrypt and understand the open source software projects on gitub that are not documented or well-documented, but it is not the same for ChatGPT ?"

And here is what ChatGPT answered:

"Yes, humans have the ability to decrypt and understand open-source projects on GitHub even if they are not well-documented. Humans can bring their knowledge, experience, and problem-solving skills to analyze the code, understand the logic, and make inferences about the underlying causality and creativity involved. Humans can apply their domain expertise and engage in critical thinking to uncover the nuances and intricacies of the project.

On the other hand, ChatGPT, being a language model, does not possess the same level of understanding and problem-solving capabilities as humans. It lacks the ability to interpret code, execute it, or apply real-time reasoning to decrypt the project's workings. While ChatGPT can generate responses based on patterns and associations it has learned from training data, it does not possess the same level of domain knowledge, intuition, and context that humans do.

In summary, humans have the cognitive abilities and expertise to analyze and understand open-source projects, even if they lack proper documentation. They can decipher the code, infer causal relationships, and appreciate the creativity involved. ChatGPT, although proficient in generating text, relies on statistical patterns in data and lacks the human-like comprehension necessary to decrypt and fully understand these projects without explicit and comprehensive documentation."
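And to give a simple sketch of one thing that a human with an interpreter can do, but that a pure language model cannot do: a human can actually run undocumented code and observe its behavior. The function below is invented for this example; probing it with a few inputs reveals what it does, even though nothing is documented:

    def f(a, b):
        while b:
            a, b = b, a % b
        return a

    # Executing it on chosen inputs exposes its behavior directly:
    for pair in [(12, 18), (7, 3), (100, 75)]:
        print(pair, "->", f(*pair))   # prints 6, 1 and 25: it computes the gcd

So by executing and observing, the human discovers the cause-effect behavior of the code, while ChatGPT, as said above, can only rely on the statistical patterns of the text it was trained on.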

And I invite you to read my previous thoughts below so that you understand my views:

I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I have to explain something important: since I have just discovered an important pattern with my fluid intelligence (read about it in my previous thoughts below), I think that Large Language Models such as GPT-4 are capturing causality indirectly from the language, and much more easily in languages like French or English. So in professions like psychology or the law or philosophy, that use a language like French or English, it is easy for Large Language Models to capture causality; but in software development it is more difficult, since for example on GitHub, where you find the open-source software projects, Large Language Models are not able to capture causality correctly, because those software projects on GitHub are not written in languages like French and English, so causality in them is not specified and modeled with a language like French or English.

So now you understand better the important pattern that I am discovering with my fluid intelligence below, which explains a very important thing: it is important to help Large Language Models such as GPT-4 to capture causality by modeling it with a language like French or English, by specifying it enough and clearly. So then you understand this important limitation of Large Language Models such as GPT-4. So I invite you to reread all my previous thoughts so that you understand my views:

So notice that I am discovering an important pattern with my fluid intelligence, since I have a high IQ; the pattern that I am discovering follows below.

I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. The pattern that I am discovering with my fluid intelligence is that Large Language Models such as GPT-4 don't capture causality like we humans do; in order to capture causality, they have to capture it "indirectly", from what is specified enough and clearly by using the "language". So I think it is the limitation of Large Language Models such as GPT-4, and it is why you are noticing that I am smart by talking as I am talking with ChatGPT below, and by also making it understand that I am understanding and discovering this pattern with my fluid intelligence. So reread my following previous thoughts so that you understand my views:

I had forgotten to include other of my important questions and the answers from ChatGPT, so I invite you to reread carefully, since it is important; and of course you have to understand that Large Language Models such as GPT-4 are based on Transformers, which explains my kind of questions to ChatGPT below:

I have just posted the following so as to show how Large Language Models such as GPT-4 are not good at software development, and why I think that Large Language Models such as GPT-4 will not replace a lot of programmers. So read it carefully; my additional new proof of it is also below:

"Perhaps most impressive were the performances of GPT-4 in professional arenas. It scored a 298 out of 400 on the Uniform Bar Exam, which is high enough to be licensed as a lawyer in every state using the UBE system. It also scored a reasonable 75% on the MKSAP 19 medical knowledge test, and an impressive 77% on the Advanced-level Sommelier theory exam.

In programming tests, GPT-4 did worse; the AI struggles with code, it seems. GPT-4 was able to get 31 out of 41 correct solutions in the "easy" Leetcode test, but got just 21/80 on the medium test and only 3 correct questions on the hard test. Meanwhile, its Codeforces rating is a measly 392, placing it below the 5th percentile of users."

Read more here:

https://hothardware.com/news/openai-gpt-4-model

And here is more logical proof of the above so that you understand why:

Here is my other important question to ChatGPT, from my new discussion with ChatGPT, that shows another important limitation of ChatGPT, so I invite you to read it carefully:

So here is my first question to ChatGPT:

"Can you explain more with some good and simple example the following:
so you have said: "Lack of Causality: Transformers inherently lack the notion of causality in their architecture. They treat all positions in the input sequence equally and do not have an explicit mechanism to model cause-effect relationships. This limitation can affect tasks that require reasoning about causality or temporal dependencies.""
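So that you can see concretely what "treat all positions equally" means, here is a minimal sketch of my own (it is an illustration, not code from the discussion): single-head self-attention without positional encodings is permutation-equivariant, so if you shuffle the input tokens, the output is just shuffled the same way, and the layer by itself has no built-in notion of "before" and "after", let alone of cause and effect:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 8  # embedding dimension, arbitrary for the demo
    Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

    def self_attention(X):
        # plain single-head self-attention: no positional encoding, no mask
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(d)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
        return weights @ V

    X = rng.standard_normal((5, d))  # five "tokens"
    perm = rng.permutation(5)        # shuffle the sequence order

    out = self_attention(X)
    out_perm = self_attention(X[perm])
    # the output is just permuted the same way: the order carried no information
    assert np.allclose(out[perm], out_perm)

And note that real Transformers add positional encodings, and decoders add a causal attention mask, but those only impose an ordering of the tokens; they still do not give the model an explicit mechanism for modeling cause-effect relationships, which is the limitation described above.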


Read the complete article at: https://novabbs.com/interests/article-flat.php?id=13413&group=soc.culture.china#13413
