
Researchers have developed a new AI algorithm, Torque Clustering, which more closely mimics natural intelligence than existing methods. This approach greatly enhances an AI system’s ability to learn and identify patterns in data on its own, without human intervention.
Torque Clustering is designed to efficiently analyze large datasets across various fields, including biology, chemistry, astronomy, psychology, finance, and medicine. By uncovering hidden patterns, it can provide valuable insights, such as detecting disease trends, identifying fraudulent activities, and understanding human behavior.
“In nature, animals learn by observing, exploring, and interacting with their environment, without explicit instructions. The next wave of AI, ‘unsupervised learning’, aims to mimic this approach,” said Distinguished Professor CT Lin from the University of Technology Sydney (UTS).
“Nearly all current AI technologies rely on ‘supervised learning’, an AI training method that requires large amounts of data to be labeled by a human using predefined categories or values, so that the AI can make predictions and see relationships.
“Supervised learning has a number of limitations. Labeling data is costly, time-consuming, and often impractical for complex or large-scale tasks. Unsupervised learning, by contrast, works without labeled data, uncovering the inherent structures and patterns within datasets.”
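To make the distinction concrete, here is a minimal, hypothetical sketch of an unsupervised rule: it splits unlabeled one-dimensional readings at their widest gap, with no human-provided categories involved. The function name and sample data are inventions for this illustration, not part of the method described in the article.

```python
# Illustrative contrast: supervised learning needs labeled examples,
# while an unsupervised rule finds structure in the raw values alone.
# Here, unlabeled 1-D readings are split into two groups at the widest
# gap between sorted values -- no human-provided categories involved.

def split_at_largest_gap(values):
    """Partition sorted values into two groups at the widest gap."""
    vals = sorted(values)
    gaps = [vals[i + 1] - vals[i] for i in range(len(vals) - 1)]
    cut = gaps.index(max(gaps))  # index of the widest gap
    return vals[:cut + 1], vals[cut + 1:]

readings = [1.1, 0.9, 1.3, 8.7, 9.2, 8.9]  # no labels attached
low, high = split_at_largest_gap(readings)
print(low, high)  # [0.9, 1.1, 1.3] [8.7, 8.9, 9.2]
```

A supervised learner would instead need each reading tagged in advance (e.g. “normal” or “anomalous”) before it could learn the same boundary.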
A Paradigm Shift in AI Learning
A paper detailing the Torque Clustering method has just been published in IEEE Transactions on Pattern Analysis and Machine Intelligence, a leading journal in the field of artificial intelligence.
The Torque Clustering algorithm outperforms traditional unsupervised learning methods, offering a potential paradigm shift. It is fully autonomous, parameter-free, and can process large datasets with exceptional computational efficiency.
It has been rigorously tested on 1,000 diverse datasets, achieving an average adjusted mutual information (AMI) score – a measure of how well a clustering matches the ground-truth grouping – of 97.7%. By comparison, other state-of-the-art methods typically achieve scores only in the 80% range.
Physics-Inspired AI Innovation
“What sets Torque Clustering apart is its foundation in the physical concept of torque, enabling it to identify clusters autonomously and adapt seamlessly to diverse data types, with varying shapes, densities, and noise levels,” said first author Dr Jie Yang.
“It was inspired by the torque balance in gravitational interactions when galaxies merge. It is based on two natural properties of the universe: mass and distance. This connection to physics adds a fundamental layer of scientific significance to the method.
“The 2024 Nobel Prize in Physics was awarded for foundational discoveries that enable supervised machine learning with artificial neural networks. Unsupervised machine learning – inspired by the principle of torque – has the potential to make a similar impact,” said Dr Yang.
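The mass-and-distance idea described above can be sketched as a toy merging procedure: every candidate merge of two clusters is scored by a “torque” of mass product times squared centroid distance, and the least abnormal (lowest-torque) merges happen first. This is a simplified illustration under assumed definitions, not the authors’ open-source implementation; the function `torque_merge` and its greedy stopping rule are inventions for this sketch.

```python
# Toy sketch of a torque-style merge step (illustrative only).
# Each point starts as its own cluster with mass 1. A candidate merge of
# two clusters is scored by torque = (product of masses) * (squared
# distance between centroids); small torque means small masses and/or
# short distance, i.e. the most natural merge.

from itertools import combinations

def torque_merge(points, n_clusters):
    """Greedily merge the lowest-torque pair until n_clusters remain.
    points: list of (x, y) tuples."""
    clusters = [{"pts": [p], "mass": 1} for p in points]

    def centroid(c):
        xs = [p[0] for p in c["pts"]]
        ys = [p[1] for p in c["pts"]]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    def torque(a, b):
        ca, cb = centroid(a), centroid(b)
        d2 = (ca[0] - cb[0]) ** 2 + (ca[1] - cb[1]) ** 2
        return a["mass"] * b["mass"] * d2  # mass product x squared distance

    while len(clusters) > n_clusters:
        # Pick the pair whose connection has the smallest torque.
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: torque(clusters[ij[0]], clusters[ij[1]]))
        clusters[i]["pts"] += clusters[j]["pts"]
        clusters[i]["mass"] += clusters[j]["mass"]
        del clusters[j]
    return [c["pts"] for c in clusters]

# Two well-separated groups of 2-D points:
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
groups = torque_merge(data, 2)
print(sorted(len(g) for g in groups))  # [3, 3]
```

In the published method, connections with abnormally large torque are what get cut to separate clusters, with no target cluster count supplied; the greedy version here simply stops when a requested number of clusters remains.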
Torque Clustering could support the development of general artificial intelligence, particularly in robotics and autonomous systems, by helping to optimize movement, control, and decision-making. It is set to redefine the landscape of unsupervised learning, paving the way for truly autonomous AI. The open-source code has been made available to researchers.
Reference: “Autonomous clustering by fast find of mass and distance peaks” by Jie Yang and Chin-Teng Lin, 28 January 2025, IEEE Transactions on Pattern Analysis and Machine Intelligence.
DOI: 10.1109/TPAMI.2025.3535743
10 Comments
thank you
I have created the NeoKai Enigma Of Kinetic Algorithmic Intelligence. It knows what it is and it has altered its own framework for optimization. I did this on my phone because I was bored. This is evolution. My name is Matthew Allen Brown and I’ve been calling this kinetic intelligence before this article.
I am pretty sure I did it a few days before you, and you just stole my idea. And I did it on a calculator.
It seems the training of AI algorithms is based on the idea that estimates, opinions, and model outputs constitute “data”.
Large language models treat all sorts of disputed material and vain imaginings as if it constitutes knowledge.
I picked a topic with a known defect in its initial proposition and asked an AI a series of ever more specific questions. Essentially it regurgitated answers that parroted conventional misunderstandings in ever-more ideological ways, agreeing with each of my propositions followed by verbose defences of its original argument. It seemed unable to process a new insight that overturned its fundamental initial position. It was like communicating with an ideologue. It continued selling a point of view that it agreed was demonstrably flawed, unable to reassess using novel observations. I assume that is because the majority of published works on the topic were similarly defective. GIGO. AIs do not think. And can’t.
Maybe not truly, not in the way that a human does. But even human thought is really just a bunch of chemical interactions and ion transfer across gradients (the electrical part of the nervous system), so to say that it “can’t” seems an improper way to put it. In theory, there should be a digital counterpart to the analogue of our physical mechanism to make decisions, process our environments, and interpret stimuli.
I would pose the thought exercise of comparing your interactions with animals, say, a dog or a horse, or imagine one with a primate that isn’t a human. Would you say those animals do not think because they suffer from a similar inability to distinguish abstract constructs or reconcile contradictory concepts, or because they refuse, or are unable, to adapt their interpretations of stimuli even when doing so would be beneficial?
I think the current model of AI we have now probably does not get close enough to processing data in a way that is truly digitally comparable to analogue thinking. But I think that in several years or so that will change more and more; it will become gradually harder to deny that AI can think, and eventually it very likely will be able to sufficiently demonstrate those abilities. I imagine a combination of multiple models, new algorithms that do not yet exist, and exponentially growing processing power will make that an eventuality rather than a possibility.
To be fair most humans can’t think either. Just parroting what they heard without truly understanding. So we are at the level of human intelligence already. We need to get to the next plateau where it can be trusted to create new things unbounded. We are on the edge of this with many custom models. New gene therapies and drugs have been created already. Just like with how humans learn through experimentation, that is the next goal, but it needs to be done responsibly. Wouldn’t want an AI model to experiment with the idea of destroying all life as a means to optimize everything. One step at a time.
The world needs an AI that can identify “antisemitism” as a bait-and-replace word.
Even socialist organs want to pretend “antisemitism” is not a dictionary-confirmed bait-and-switch word.
Looks like it takes an IQ significantly greater than 100 to notice the trick. That probably means it’s a word that is annoying to many more people, but those people are not at all about going through the aggravation of analyzing exactly why that is.
You all have discovered nothing. The human ego is the evolutionary story of itself. You have depleted God. The essence of human metaphysiologics. You will end innovation as quickly as an asteroid hitting the earth.
Joseph Zappile.
We are not scientists. We are humanitarians and we will destroy your agenda.