"Quite clearly, our task is predominantly metaphysical, for it is how to get all of humanity to educate itself swiftly enough to generate spontaneous social behaviors that will avoid extinction."
R. Buckminster Fuller
Tuesday, October 17, 2023
AI and Knowing Self
Conceptual impressions surrounding this post have yet to be substantiated, corroborated, confirmed, or woven into a larger argument, context or network.
"Classical perspectives on symbols in AI have mostly overlooked the fact that symbols are fundamentally subjective—they depend on an interpreter (or some interpreters) to create a convention of meaning. Thus, human-like symbolic fluency is not guaranteed simply because a system is equipped with classical “symbolic” machinery. Instead, symbolic fluency should be evaluated through behaviours, whether these behaviours involve interactions with interlocutors or simply demonstrate improved internal reasoning. This can be measured by inspecting a set of graded traits, such as receptiveness to new convention, the ability to construct new conventions, and demonstrated understanding of the meaning behind syntactic maneuvers. Because optimizing directly for behaviour is increasingly feasible, we argue that the key to developing machines with human-like symbolic fluency is to optimize learning-based systems for these symbolic behaviours directly by placing artificial agents in situations that require their active use. Human socio-cultural situations are perhaps best suited to fulfill this 9 Symbolic Behaviour in Artificial Intelligence requirement, as they demand the complex coordination of perspectives to agree on arbitrarily-imposed meaning. They can also be collected at scale in conjunction with human feedback, and hence allow the use of powerful contemporary AI tools that mould behaviour."
The Future, September 8, 2024. Story by Kristin Houser, Freethink.
In a new paper published in Nature, a team of British and Canadian researchers fine-tuned a pre-trained large language model (LLM) — the kind of AI behind ChatGPT — on a dataset of Wikipedia articles.
They then pulled a segment of text from the training dataset (the Wikipedia articles) and prompted their fine-tuned LLM to predict the next bit of text.
They repeated this process until they had a trove of synthetic data as large as the original Wikipedia dataset.
They then fed the synthetic data back into training the model and repeated the process, fine-tuning the AI and then using it to generate more synthetic data for training. After nine rounds of this recursive training, the AI was producing pure gibberish.
It’s possible that allowing even a little synthetic data into an AI’s diet could have a negative effect on its output.
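To make the shape of that loop concrete, here is a minimal toy sketch in Python. It is not the researchers' code: a tiny bigram model stands in for the LLM, one hard-coded sentence stands in for the Wikipedia dataset, and the helper names (train_bigram, generate) are my own. What it does mirror is the procedure described above: train on a corpus, generate a synthetic corpus of the same size, retrain on that synthetic data, and repeat for nine rounds.

import random
from collections import defaultdict

def train_bigram(words):
    # Count which words follow which; a toy stand-in for fine-tuning an LLM.
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, length, seed_word):
    # Sample a word sequence of the requested length from the model.
    word, out = seed_word, [seed_word]
    for _ in range(length - 1):
        successors = model.get(word)
        word = random.choice(successors) if successors else seed_word
        out.append(word)
    return out

random.seed(0)
real_corpus = ("the quick brown fox jumps over the lazy dog and the "
               "small grey cat watches the quiet river flow past the old mill").split()

corpus = real_corpus
for round_number in range(1, 10):          # nine rounds of recursive training
    model = train_bigram(corpus)
    # The synthetic corpus is the same size as the data the model was trained on.
    corpus = generate(model, len(corpus), corpus[0])
    print(round_number, "distinct words:", len(set(corpus)), "|", " ".join(corpus[:8]))

The printout of distinct words per round is only a crude proxy for the degradation the Nature paper reports, but the structural point survives the simplification: each generation is trained on the errors and omissions of the one before it, so information from the original data steadily drains away.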
* * *
Wisdom creates a portal between states of change/energy. Each state, agency and dimension acts as a filtering lens between micro and macro patterns of energy in motion that, when taken together, afford a presence within a quantum field of virtual potential and probability (QFVPP).
Wisdom views outer reality as a symbolic reflection of a greater and more vast reality within.
Consciousness represents an unknown from which patterns of energy in motion appear to emerge out of nowhere. These patterns of energy create and participate in a never-ending cycle of transition and transformation that appears to be both tangible and intangible. Knowledge, empathy, understanding and love invite it in; wisdom gives conscious awareness structure (form).
Wisdom stands at the portal of both lesser and greater forms of opinion, knowledge, and the understanding that all things change all the time.
Agencies, when appropriately patterned and networked, i.e. designed, cast themselves forward and expand within the realms of greater Truth, Beauty and Goodness. Those of lesser degree are harnessed in chains of repetition.