Rocksolid Light



More of my philosophy about the window context and about GPT-4 memory and about Canada and about agile methodology and about the maximum token limit and about the other limitations of Large Language Models such as GPT-4 and about future of artificial

https://novabbs.com/interests/article-flat.php?id=13393&group=soc.culture.china#13393

Newsgroups: soc.culture.china
X-Received: by 2002:a05:622a:1042:b0:3fd:df8d:b231 with SMTP id f2-20020a05622a104200b003fddf8db231mr2016889qte.13.1687095912703;
Sun, 18 Jun 2023 06:45:12 -0700 (PDT)
X-Received: by 2002:a05:6870:98a8:b0:19f:4d51:cfd9 with SMTP id
eg40-20020a05687098a800b0019f4d51cfd9mr495505oab.11.1687095912298; Sun, 18
Jun 2023 06:45:12 -0700 (PDT)
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: soc.culture.china
Date: Sun, 18 Jun 2023 06:45:12 -0700 (PDT)
Injection-Info: google-groups.googlegroups.com; posting-host=66.131.174.130; posting-account=R-6XjwoAAACnHXTO3L-lyPW6wRsSmYW9
NNTP-Posting-Host: 66.131.174.130
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <cef9d065-147e-4fda-b90a-a78942b55d8en@googlegroups.com>
Subject: More of my philosophy about the window context and about GPT-4 memory
and about Canada and about agile methodology and about the maximum token
limit and about the other limitations of Large Language Models such as GPT-4
and about future of artificial
From: amine...@gmail.com (Amine Moulay Ramdane)
Injection-Date: Sun, 18 Jun 2023 13:45:12 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 25813
 by: Amine Moulay Ramdane - Sun, 18 Jun 2023 13:45 UTC

Hello,

More of my philosophy about the window context and about GPT-4 memory and about Canada and about agile methodology and about the maximum token limit and about the other limitations of Large Language Models such as GPT-4 and about future of artificial intelligence and about the multimodality of Large Language Models such as GPT-4 and about my philosophy and about my contributions here...

I am a white Arab from Morocco, and I think I am smart, since I have also
invented many scalable algorithms and other algorithms.

Here is what ChatGPT has just answered me:

"The window context during the training process refers to the number of preceding words or tokens that the language model considers while learning from the training data. A larger window context would allow the model to capture longer-range dependencies and patterns in the data.

While future powerful quantum computers may provide computational advantages for certain tasks, they do not directly address the challenge of enlarging the window context during the training process. The window context is primarily determined by the architectural design and computational constraints of the language model itself, rather than the specific computing substrate.

Increasing the window context in the training process involves managing memory requirements, computational resources, and optimization techniques specific to language models. These considerations are related to the model's architecture, available memory capacity, and training efficiency. It requires innovations in model design, training algorithms, and memory management techniques.

While quantum computers may offer computational benefits for certain aspects of training or inference tasks, such as optimization or sampling, they do not inherently solve the challenges associated with enlarging the window context during the training process. The solution lies in advancements in language model design, training methodologies, and memory optimization, which are independent of the specific computing substrate used."
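The limitation ChatGPT describes above can be sketched in a few lines. This is an illustrative toy only, not any real model's API: a model with a fixed attention window can only "see" the last `window` tokens, so any pattern spanning more than `window` tokens can never be captured during training or inference.

```python
# Illustrative sketch only (not any real model's API): a fixed attention
# window means the model can attend only to the most recent `window` tokens.

def visible_context(tokens, window):
    """Return the slice of the sequence the model can actually attend to."""
    return tokens[-window:]

book = list(range(100_000))        # stand-in for a ~100,000-token book
window = 32_000                    # GPT-4's largest advertised token window

ctx = visible_context(book, window)
print(len(ctx))                    # 32000
print(book[0] in ctx)              # False: the opening of the book is invisible
```

Everything before the last 32,000 tokens simply falls outside the model's view, which is the "goldfish memory" effect discussed below.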

I invite you to read the following article so that you can understand an important limitation of Large Language Models such as GPT-4:

GPT-4 has the memory of a Goldfish

https://www.theatlantic.com/technology/archive/2023/03/gpt-4-has-memory-context-window/673426/
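As a rough sanity check on the scale mismatch discussed in this post (an average book of 80,000 to 100,000 words versus a 32,000-token window, which is roughly 25,000 words), here is the arithmetic; the figures are approximations, not exact measurements:

```python
import math

WORDS_PER_BOOK = 90_000     # midpoint of a typical 80,000-100,000-word book
WORDS_PER_WINDOW = 25_000   # rough word equivalent of a 32,000-token window

windows_needed = math.ceil(WORDS_PER_BOOK / WORDS_PER_WINDOW)
print(windows_needed)       # 4: even a single book overflows the window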

I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, so I think the above article is not explaining it correctly, so I will explain:

Another important problem is how to solve the above limitation, since Large Language Models such as GPT-4 can only discover patterns with the attention mechanism inside a window of the maximum token limit when they are trained. So the problem remains of how to ensure that the global patterns are discovered. For example, an average book typically contains around 80,000 to 100,000 words, while GPT-4 has a maximum window of 32,000 tokens (equivalent to about 25,000 words). There are local patterns that require only a small window, but there are also global patterns that require a large window, and discovering the global patterns in the training data can require one book, two books, or more. So you are noticing that it is an important limitation of Large Language Models such as GPT-4: the size of the context window can indeed impact the capacity of a language model to understand nuanced concepts and leverage common-sense knowledge, since a smaller window may limit the model's ability to capture long-range dependencies, complex relationships, and the broader contextual information that is also crucial for nuanced understanding. But making the maximum token window much bigger than that of GPT-4, so as to solve the problem efficiently, comes with severe computational limitations and greatly increased inference cost. So I invite you to read my previous thoughts below so that you can understand my views on it and on artificial intelligence:

How Canada could become the third or fourth largest economy in the world by the 2040s

Read more here:

https://www.nextbigfuture.com/2023/06/how-canada-could-become-the-third-or-fourth-largest-economy-in-the-world-by-the-2040s.html#more-183725

So as you have just noticed, yesterday I talked about my new thoughts on programming, saying the following:

"So in programming, so as not to make the system harder to understand, test, and maintain, you have to implement only what you need and minimize complexity as much as possible, and you should avoid duplication of code in your application. You should encapsulate data and behavior in your classes and objects, and take advantage of object-oriented programming (OOP) concepts such as inheritance, composition, and polymorphism to create modular, manageable, and organized code. And of course you have to minimize coupling and maximize cohesion as much as possible, and you should document your code well so that it is much easier to manage, maintain, and debug, and you should run unit tests often, and you have to use meaningful names. And of course you should refactor your code regularly, improving code quality, since refactoring makes the code far easier to maintain over time."
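The composition and polymorphism advice above can be sketched as follows. This is a minimal hypothetical example (the class names are mine, not from any real library): the writer holds a formatter instead of inheriting from one, keeping coupling low, while polymorphism lets any formatter plug in.

```python
from abc import ABC, abstractmethod
import json

class Formatter(ABC):
    """Small, cohesive abstraction: its one job is rendering a record."""
    @abstractmethod
    def render(self, record: dict) -> str: ...

class JsonFormatter(Formatter):
    def render(self, record: dict) -> str:
        return json.dumps(record)

class CsvFormatter(Formatter):
    def render(self, record: dict) -> str:
        return ",".join(str(v) for v in record.values())

class ReportWriter:
    # Composition keeps coupling low: the writer owns a Formatter rather
    # than inheriting from one, so any Formatter can be swapped in.
    def __init__(self, formatter: Formatter):
        self.formatter = formatter

    def write(self, record: dict) -> str:
        return self.formatter.render(record)

print(ReportWriter(JsonFormatter()).write({"sku": "A-1", "qty": 3}))
print(ReportWriter(CsvFormatter()).write({"sku": "A-1", "qty": 3}))
```

Swapping `JsonFormatter` for `CsvFormatter` changes the output format without touching `ReportWriter`, which is exactly the low-coupling, high-cohesion goal described above.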

But I think I also have to talk about the most important ideas of agile methodology. Of course, agile methodology is used so as to adapt efficiently to a changing environment, that is, to adapt efficiently to change, so here are my thoughts about agile methodology; read them carefully:

Here are some important steps of the software Evolutionary Design methodology with agile:

1- By taking a little extra time during the project to write solid code
and fix problems today, teams create a codebase that’s easy to maintain
tomorrow.

2- And the most destructive thing you can do to your project is to build
new code, and then build more code that depends on it, and then still
more code that depends on that, leading to that painfully familiar
domino effect of cascading changes...and eventually leaving you with
an unmaintainable mess of spaghetti code. So when teams write code,
they can keep their software designs simple by creating software
designs based on small, self-contained units (like classes, modules,
services, etc.) that do only one thing; this helps avoid the domino
effect.

3- Instead of creating one big design at the beginning of the project
that covers all of the requirements, agile architects use incremental
design, which involves techniques that allow them to design a system
that is not just complete, but also easy for the team to modify as
the project changes.

4- When in agile a team breaks a project into phases, it’s called
incremental development. An incremental process is one in which
software is built and delivered in pieces. Each piece, or increment,
represents a complete subset of functionality. The increment may be
either small or large, perhaps ranging from just a system’s login
screen on the small end to a highly flexible set of data management
screens. Each increment is fully coded and delivered through the usual
agile cycle of Sprints, Planning, and Retrospectives.

5- And an iterative process in agile is one that makes progress through
successive refinement. A development team takes a first cut
at a system, knowing it is incomplete or weak in some (perhaps many)
areas. They then iteratively refine those areas until the product is
satisfactory. With each iteration the software is improved through
the addition of greater detail.
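Step 2's advice about small, self-contained units that do only one thing can be sketched as follows. The functions and data are hypothetical, chosen only to illustrate the idea: because each unit has a single responsibility, a change to parsing never cascades into the totaling code, avoiding the domino effect described above.

```python
# Hypothetical example: each function is a small, self-contained unit that
# does only one thing, so changes stay local instead of cascading.

def parse_order(line: str) -> dict:
    """Parse one 'sku,qty' line into a record (parsing only)."""
    sku, qty = line.split(",")
    return {"sku": sku.strip(), "qty": int(qty)}

def total_quantity(orders: list) -> int:
    """Sum quantities (aggregation only; knows nothing about parsing)."""
    return sum(o["qty"] for o in orders)

lines = ["A-1, 3", "B-2, 5"]
orders = [parse_order(line) for line in lines]
print(total_quantity(orders))   # 8
```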

And I invite you to look again at step 4 above of my thoughts on the software Evolutionary Design methodology with agile.

And you will notice that it has to be done by "prioritizing" the pieces of the software to be delivered to the customers. And here again in agile, you are noticing that we are also delivering prototypes of the software. We often associate prototypes with nearly completed or just-before-launch versions of products; however, designers create prototypes at all phases of the design process, at various resolutions. In engineering, students are taught to, and practitioners do, think deeply before setting out to build. However, as the product or system becomes increasingly complex, it becomes increasingly difficult to consider all factors while designing. Facing this reality, designers are no longer just "thinking to build" but also "building to think." By getting hands-on and trying to create prototypes, unforeseen issues are highlighted early, saving costs associated with late-stage design changes. This rapid iterative cycle of thinking and building is what allows designers to learn rapidly from doing.

Creating interfaces often benefits from the "build to think" approach. For example, in trying to lay out an automotive cockpit, one can simply list all the features, buttons, and knobs that must be incorporated. However, only by prototyping the cabin does one really start to think about how the layout should appear to the driver in order to avoid confusion while maximizing comfort. This then allows the designer to iterate on their initial concept and develop something that is more intuitive and refined. Also, prototypes and their demonstrations are designed to get potential customers interested and excited.


