
Researchers say that a person’s intelligence plays a bigger role in their computer proficiency than previously believed, so much so that practice alone may not be enough to ensure ease of use.
A new study has found that general cognitive abilities, such as perception, reasoning, and memory, are more important than previously believed in determining a person’s ability to perform everyday tasks on a computer.
“Our research findings are the first clear proof that cognitive abilities have a significant, independent, and wide-ranging effect on people’s ability to use a computer. Contrary to what was previously thought, cognitive abilities are as important as previous experience of computer use,” says Aalto University’s Professor Antti Oulasvirta, who studied human-computer interaction extensively with his team.
The researchers emphasize that these findings raise concerns about digital equality. As user interfaces have grown increasingly complex, practice alone is no longer enough; cognitive ability is now a key factor in successfully navigating digital environments.
“It is clear that differences between individuals cannot be eliminated simply by means of training; in the future, user interfaces need to be streamlined for simpler use. This age-old goal has been forgotten at some point, and awkwardly designed interfaces have become a driver for the digital divide. We cannot promote a deeper and more equal use of computers in society unless we solve this basic problem,” Oulasvirta says.

The research was carried out jointly by researchers from the Aalto University Department of Information and Communications Engineering and the University of Helsinki Department of Psychology.
Age is still the most significant factor
Test subjects belonging to different age groups participated in the study. They were given 18 different tasks, and the researchers observed how they performed. The tasks included software installation, navigation, use of spreadsheets, and filling in forms.
The estimation of cognitive abilities is based on a standardized and well-established measurement method in the field. This is the first-ever study to measure users’ actual ability to perform daily tasks on a PC, as previous studies have relied on participants self-assessing their abilities via questionnaires.
“We know that people may have a false sense of their own abilities, which is why it was important to measure how well they actually performed in the tasks,” says University Lecturer Viljami Salmela from the University of Helsinki.
The study provided a wealth of new information about which cognitive abilities matter most. While processing speed is important in computer games, it is not emphasized in everyday tasks on the computer.
“The study revealed that, in particular, working memory, attention, and executive functions stand out as the key abilities. When using a computer, you must determine the order in which things are done and keep in mind what has already been done. A purely mathematical or logical ability does not help in the same way,” says Salmela.
According to Oulasvirta, there are also major differences between applications and user interfaces. “For example, the most important thing in using a spreadsheet program is practice, while linguistic capabilities are highlighted in information retrieval tasks and executive functions are emphasized in online banking.”
“However, the research findings also show that age remains the most important factor in how well an individual can use applications. Older people clearly took more time to complete their tasks, and they also felt that the assignments were more burdensome,” says Salmela.
Reference: “Cognitive abilities predict performance in everyday computer tasks” by Erik Lintunen, Viljami Salmela, Petri Jarre, Tuukka Heikkinen, Markku Kilpeläinen, Markus Jokela and Antti Oulasvirta, 16 August 2024, International Journal of Human-Computer Studies.
DOI: 10.1016/j.ijhcs.2024.103354
63 Comments
Is that lack of intelligence with the user or the developer who is constantly changing and jumbling UIs to “make it better”?
Beware of anything smaller than its user manual.
Developer, or, rather, the manager who promotes that.
Most of the time, “better” is based on personal preferences and vision, not on average human physiology and psychology. Semiotically-challenged people try to invent “Modern UI”.
ERRONEOUS HEADLINE! Memory issues do not indicate lack of intelligence! Get your facts about cognition straight!
Yes! This! 👍
Yes indeed. This explains a lot.
Who invented the computer?
People who are older. The first one I saw, sixty years ago, took up two very large rooms. Most retired people have been using them since the mid-’80s. What makes things difficult is bad eyesight and the compact nature of the device, not mental decline. Someone needs to address these problems before some of your posters retire.
Huh?
I find that stress from operating “on stage,” plus complex exceptions to the rule (where I worked last, roughly every fourth patron interaction was one), made things hard. Dealing with multiple patrons at once, each involving three tasks, sub-menus, and different websites, and then getting back to where I was, is like playing three-tier chess against two opponents. Ugh! I sometimes get lost, and there were no computers when I was in school, just the punch card system.
It usually takes me less than 5-10 minutes to easily use a new interface.
So intelligence will help with this.
“Lack of Intelligence”: I agree that lack of intelligence may be why certain individuals may never be computer savvy. However, not that you are inferring those individuals are stupid, but you might add that self-achievement in all other forms is still attainable.
I think loss of features with this mindset will only increase cognitive decline.
I think the way we interact with technology in the instance of this article should be as follows.
For older people… just as there are phones with big buttons, the same strategy could work for the older population: an AI chat assistant you can type what you want done to, and if you can’t figure it out, it does it for you upon request, like clicking a button on your behalf.
Just because older humans can’t cope because of cognitive decline doesn’t mean younger people with the cognition to handle it should be punished for the decline of older humans.
More button’s, feature’s and option’s please.
If you can’t cut it, maybe get off the computer instead of making it worse for those that can.
“More buttons, features, and options, please.”
Not
“More button’s, feature’s and option’s please”
Grammerly can be your friend.
Considering some of the spellcheck-isms found on this platform, Jenna O’s boo-boo wasn’t that bad.
Your final comment is ageist and snarky and undercuts any suggestions you have to make computers more equitable. When you no longer can rule the world and your eyes fail or technology advances beyond your capabilities, you might want others to show you respect and care.
💯%! Well said
Long-time computer programmer here; I have been designing, implementing, launching, and maintaining computer applications at planetary scale. The “research” in this article is poorly conceived and the conclusions paint a very limited picture, which, maybe not surprisingly, looks like clickbait.
Yes, cognitive abilities have a role, just like in pretty much anything we do. A common issue with computers, phones included, is that both the operating system and applications require the user to have at least a fundamental understanding of how things work under the hood. This is why I pick up computer things so much faster than my very smart partner – they simply do not have the intuition needed to build a comfort zone quickly.
For example, look at Google Meet. A very stable and reliable communication platform, despite what Google haters may claim. It has a fundamental flaw: it exposes, at the user level, the call/meeting as a basic construct. One then joins a call/meeting and leaves a call/meeting. This may work in a business setting, but when you simply want to call someone, the concept clashes with how we think about a call. Google has started to fix this and hides the call/meeting object under the hood, but it was a mistake to expose a programmer’s mindset to users.
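The fix the commenter describes is essentially a facade: keep the internal meeting object, but expose only “call” verbs to the user. A minimal sketch of that idea in Python (all class and method names here are hypothetical illustrations, not Google’s actual design):

```python
class Meeting:
    """Internal construct: users should never see this object directly."""

    def __init__(self, meeting_id):
        self.meeting_id = meeting_id
        self.participants = []

    def join(self, user):
        self.participants.append(user)

    def leave(self, user):
        self.participants.remove(user)


class CallService:
    """User-facing facade: exposes call/hang-up verbs, hides the Meeting."""

    def __init__(self):
        self._active = {}   # user -> Meeting they are currently in
        self._next_id = 0

    def call(self, caller, callee):
        # Under the hood a call is still a meeting both parties join,
        # but the user only ever sees "call Bob".
        meeting = Meeting(self._next_id)
        self._next_id += 1
        meeting.join(caller)
        meeting.join(callee)
        self._active[caller] = meeting
        self._active[callee] = meeting

    def hang_up(self, user):
        meeting = self._active.pop(user)
        meeting.leave(user)
```

The user’s mental model (“I call Bob, I hang up”) maps one-to-one onto the facade, while the join/leave lifecycle of the meeting object stays an implementation detail.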
Such examples exist everywhere. Database structures and data organization, caching, exposing internal operations to users – all of this makes it hard for a regular person to build an accurate-enough mental model to become a proficient user.
The second dimension, making it all so much worse, is horrible UIs and the need for UI designers to chase the latest fad. User journeys are poorly defined and not implemented smoothly, the terminology used comes from the vocabulary of programmers, and so on.
This being said, some compassion is needed for programmers as well. They face an almost impossible challenge: connecting the abstract thinking needed to design computer programs with a grasp of how an end user actually thinks about what they are doing. It’s a bridge that is so hard to cross – the great UI designers and user researchers needed to solve this problem are rare and very expensive, so everyone ends up suffering.
AI may be able to help here one day, helping with understanding users and their journeys and reducing some of the cost. I sincerely doubt it will be a magic bullet.
Very well said (another long time software developer here). I do think age and set expectations of “how this dang thing should work” play a part. Our biases in how things should be organized play the biggest part in our ability to understand how others have organized.
Analyst programmer in industrial computing, now implementing UX and UI.
Cognitive load and the ability to see patterns are the most limiting factors, and both are linked to the g factor.
We address that by exposing the structure to the user in a clear way and by making the tasks context-free, to lessen cognitive load.
That means we think as much as we can on behalf of the user and when we can’t, we make it easy for the user to have only the information he needs.
That requires a monstrous effort on our part, but it makes our system much easier to use, even for less intelligent folks.
I am pushing to bring back affordance, because some folks have trouble identifying buttons, even when they have their own distinct appearance and contain verbs. But modernity calls for abstract UI. (It’s a real widespread regression.)
And when a user does something because they still understand how the software works, we make it work, as long as it will not break other people’s configurations. So, some things may be extra weird, because it’s logical to some people. The good news is that it can be safely ignored. It’s just nice for people who would mess something up and have the system correct it. And when we can’t, we tutorialise through explicit possible resolutions.
So, thinking for the user.
We are now pondering AI to provide even more pre-thought things.
This article is nothing but a self-serving comment to the so-called smart computer community who think that they know everything. The truth is, the only thing they know is what they read in a book. When it comes to real-world practical application they are less than useless. I can say this because that’s been my experience with almost every computer person I’ve met. I say almost every only to not sound absolute. The truth is it’s been every computer person I have met.
Another reality is that the computer is what is making people stupid. The computer does all our thinking for us. Younger people don’t know how to read an analog clock, they don’t know how to converse, they don’t know social interactions, their short-term memory is nonexistent, they do not have math skills, and the list goes on and on. Using a computer does not require intelligence, but it does require being willing to give up your own intelligence. That is the harsh reality that these genius computer people can’t understand.
This comment highlights an issue very well. People cling to obsolete tech and refuse to adapt to better alternatives (down to calling people who do elitist), but only when it comes to computers. Why would modern people, born after the ubiquity of digital clocks, need (or even want) to read analogue clocks, a thing that existed due to the limitations of our tech centuries ago?
You ride your “computer people are dumb” horse quite well, but how good are your knot-on-a-string counting skills? How good are you at utilising clay and stylus for record keeping? Or even at properly inputting data with a punchcard? I’m willing to bet the answer to all of those is a big fat “no”. And yet you do not consider those to be flaws of yours, because in your mind those are obsolete, replaced by better and more convenient alternatives…
Hi
Not being able to spell shows you aren’t intelligent enough to be commenting on this topic. You just like getting your brainless 2 cents in so people will here you. You must be a millenial phone idiot that watches videos instead of reading. Keep your mouth shut, you’d be better off.
Maybe if he were here he’d be able to HEAR your comment.
Hear you, not here you
I find this article a bit odd—most people I know use computers daily, whether through a smartphone or a PC. I’m curious where the researchers found their test subjects.
Beyond that, the article contradicts itself. It starts by stating that age is the most significant factor in digital performance, then pivots to emphasize intelligence, which it defines through cognitive traits like memory and attention. That’s misleading.
Memory and attention can be affected by stress, trauma, or developmental factors—they aren’t fixed indicators of intelligence. Struggling with tech often reflects a mismatch between system design and user needs, not a lack of capability.
If we want digital equity, we need to stop blaming users’ brains and start designing smarter, more intuitive systems. Better UX, not IQ, is the real solution.
Like in every other area, “equity” means reducing everyone to the lowest common denominator.
What we need isn’t to generally make user interfaces so simple that 3 year olds could use them, but to make them as powerful as possible for the performance weighted average of your target demographic.
The idea that the least computer affine users should use the same tools as specialists or enthusiasts needs to die, since in the end it makes software worse for pretty much everyone. A competitive ecosystem of tools aimed at different demographics is always preferable to “default software” enforced by a tech monopoly.
So long as there is a pathway for each system to communicate with each other, yes. To see how difficult that is, try running a simple sentence thru translation software…in English, then Hindi, then Spanish, Chinese, German, etc. After running that gauntlet, see if you have an understandable sentence.
Certainly some good points made regarding the potential pitfalls of such a study but it seems like very few bothered to even do a cursory reading of the study’s methods, results and conclusions. The methodology and results are technically valid for the most part, except there’s really nothing to show that similar results wouldn’t be found on non-computer procedural tasks. If anything, the correlation between IQ test results and ability to complete computer-based tasks demonstrates that they essentially measure the same thing, just that one is performed via computer interface. A set of non-computer tasks would be necessary as a control to conclude that the computer element is a significant factor. But I imagine that would also correlate similarly to IQ test results.
But this supposed cognitive link to success with computer interfaces isn’t even the primary factor, or even close to it. Age was by far the most significant factor in completing the tasks. It is in this aspect where the study’s most major flaw lies, and is actually an intentional feature to distinguish it from similar studies. To successfully complete a task in this study, it must be completed in a certain time, with no timer other than a 30 second warning. That is almost always going to put older people at a disadvantage, and in a manner that is completely unrelated to computer interfaces per se.
So indeed the study does have serious problems, and utterly fails to demonstrate what the researchers claim it does. But the reasons for this are specific to the particular design of this study and most importantly the lack of controls to distinguish how these results would differ significantly from similar non-computerized tasks.
By the way, my comment was not intended to be a reply to a specific comment. Apparently I have difficulty with computer interfaces, except this is most likely the result of a bad UI since the ads tend to create interference and cause inadvertent taps on comment replies. And nowhere does it show clearly that I was replying to a specific comment vs the article as a whole. This example alone demonstrates more of a link between task success and UI design than the whole of the study in question. (Supposedly the results could inform changes to UI design for those with cognitive deficits)
I disagree with the conclusions presented in this article and think its conclusions as presented here will lead to misunderstandings.
They never said the users had never used these things before, but previous experience was a factor included in the study and they found that experience did not predict how well the subjects did, at least not as much as they expected.
Aside from the headline and stinger text, which are usually just written to attract attention, the article says “cognitive abilities” not “intelligence.” They’re emphasizing that cognitive ability had a higher impact than expected, which is a new finding, while age being a predictor is already long known.
They also never said people should get a higher IQ; they literally said “in the future, user interfaces need to be streamlined”, and you then say they’re blaming users’ brains? They literally say we need more well-designed systems and better interfaces.
So I guess we need smarter, more intuitive article writers too, since literacy is clearly low.
The first comment I’ve seen that makes sense, displays great sentence fluency and doesn’t contain misspelled nor misleading words. Clear, concise and to the point! Some comments make me scratch my head and wonder what they’re even talking about. I’m glad I was able to understand yours and see your insight clearly.
Yes, I agree! It shows a lack of intelligence on the author’s part. IQ is multifaceted, with aspects including verbal processing and comprehension. So someone could score high on these and have an average or above-average IQ but score lower on other components because of weakness in those areas, not an overall lack of intelligence. This is particularly important for learning differences such as dyslexia, which show an uneven profile across different components of the test alongside an average or above-average IQ. Even giftedness often presents with uneven scores. Essentially, the authors were proposing that computers are tailored to a particular learning profile, which makes them less accessible to others. That is a very important point, given that people with learning differences such as dyslexia often benefit from technology and assistive devices, yet the technology may not be tailored well to people with deficits in processing speed and working memory, which often coexist with these learning styles.
Edit: I meant the authors of this terrible SciTech article, not the authors of the actual journal article, which made much more sense and never once mentioned intelligence, only certain cognitive abilities. These are entirely different things; someone can have a high IQ score and still be weak in certain abilities such as working memory, and then potentially struggle to use a computer because of it.
I’ve seen people who are dumb as a sack of hammers operate computers and smartphones without any trouble, simply because they grew up with the technology. Time-travel DaVinci to this day and age, and see how long it would take him to figure out a smartphone without any guidance.
Probably a couple of hours, to be honest. A guy who designed functional machines that were impossible to build with the tech level his society was on, would most likely have no problem figuring out existing tech today.
Very little about smartphone tech is intuitive. Without a familiarity of icons and their functions, he’d be lost. LOL.
Would he figure THAT one out?
One has to be willing to try. I compare your example to becoming a bookworm with little interest in reading.
Ha! Well said.
Might it be that current “modern” GUI design is becoming increasingly inscrutable? No clear visual cues separating the different UI elements from each other. Some functions are now only an abstract icon and hopefully a tool tip. Some functions that were at least easily discoverable in the past now seem “vanished” and require online research to find out how they scuttled it away in the interface. Modern GUI is very much form over function. What do users need discoverability for? Can’t we see how pretty this looks?
Did they just say older people have lower intelligence in that subheading summary?
Silly article. Did it take AI to discover this?
On the contrary, I think this article is a door to something bigger, tying the worlds of the new and old generations.
Installing software is a task? Kinda waiting for longitudinal studies with DGAF, revulsion sidechannels, and design dystrophy consideration baked in, maybe in J. Outside World Detonation.
Kind of obligated to look at TFA and draw a line of fixes across all Arch Distros (jk, Fedora spins, Kali and a bsd, maybe an RTOS,) kind of obligated to free youth from emotional labor on OS with no damn emotional sense. The ‘humans are s— at Markov State Comprehension without formative drills’ conclusion seems wild.
Also me: Yass I need a handwritten Chinese keyboard and Telugu input options. I can save OLED power draw by masking less fine type on a hand selected marching Bayer Mask? Yes please. Why is my mouse jittery if CPU is at 12%?! Can I emit cookies so my Square Enix game world drives ads where I permit them, so I get recipes for local drops?
How did you gain your knowledge about computers?
In the auto industry, drivers can just talk to their car because the visual interfaces are too confusing. Developers have a hard time seeing the perspective of the user. Just test the user experience on your grandparents and don’t stop refining until they can use it. Then you will have a great user experience and a successful product.
So how easy would it have been to learn your mother tongue without your mother, family, or tribe? When we discuss such taken for granted “skills” we are automatically assuming that the so-called “unintelligent” user is also surrounded by a readily available environment of patient, longtime, savvy, but still current users. Such is not typically the case with the generation who grew into adulthood without ever being exposed to computers in school, work, and family. I literally had not even been exposed for the first time to computers until my thirties when on a blue collar (outdoor) job that only used them for 10% or less of the time in the course of a shift. I also switched to chemical lab work where computers were more prevalent but not the dominant flow path of the job. Oh, and I was tested by multiple intelligence tests and placed highly (again and again) on abstract industrial, logic, problem solving, reading comprehension skills tests. But, I refer to the admission this article made about innate intelligence not being enough alone to interface with a computer and that being older, having attention deficits, memory challenges, and too much elapsed time entering data singled out a user as being cognitively subpar for the computer and as a result branded said user as “ipso facto unintelligent”.
It is often the program that is not user friendly. This was much to the chagrin of the DOGE techwits, who were baffled by Social Security’s COBOL programming on the government computers, as was discovered after much public outrage and embarrassment once the issue became undeniable. Yet still, DOGE plows on, forcing a technology protocol on government employees and the citizens they serve, while cutting down on the availability and interpretive role that fully human government agents have always provided to clients of all cognitive levels. What is revealed is that coders are not ready to serve the public in a capacity of competence. It is the folly of such techwits to presume otherwise.
> So how easy would it have been to learn your mother tongue without your mother, family, or tribe?
That is a False Equivalency at its finest. Language is one of the most complicated and esoteric things one can learn, even sciences are easy, as they at least have structure and logic behind them, while languages are entirely arbitrary.
Meanwhile computers, since at least the ’90s (which is over 30 years ago, need I remind you), have had very approachable and “self-describing” interfaces that one can learn and comprehend by just sitting in front of a computer, reading and clicking on stuff, and seeing what happens. You know, not unlike how toddlers learn to interact with the surrounding world.
> When we discuss such taken for granted “skills” we are automatically assuming that the so-called “unintelligent” user is also surrounded by a readily available environment of patient, longtime, savvy, but still current users.
Which is a correct assumption. Sure, 40 years ago, in the ’80s, when only a select few had any experience with computers, and a computer manual was an encyclopedia-sized binder of thick computer jargon – back then, that assumption would not have held. But we no longer live then. We have better manuals, interactive learning, and pretty much everyone has at least some basic knowledge about PCs. For starters, most people have a small PC with a severely dumbed-down UI in their pocket. One that is also connected to the internet.
I’m sorry, but all your arguments come straight out of a pre-2000s mindset and do not reflect the realities of people approaching computers in modern days, nor modern-day computers themselves. These days, all it really takes is an attitude of “I want to learn how to use this” when approaching a computer to become decently skilled in a matter of days, without the need for any mentors (which is a lot more than one could say for many skills people expect from an average adult: cooking, driving, house maintenance – none of those are easy to learn, and all of them require a lot of dedication and teachers and/or learning material external to the process). And people still fail to use computers despite that, hence this study, in my opinion.
I have been asked by a university student why they need to memorize anything “when we have a computer in our pocket”. Needless to say, these same people are great with technology, but they don’t have a store of knowledge about much except popular culture. Many Americans have NO intellectual curiosity, no matter what their age is. It is often a source of derision. We are leery of people who learn for the joy of learning; they must want to feel superior. That attitude is a traditional part of American culture.
I am 75, male and care very little about technology. But, even I know that most computer problems are directly caused by operator error. I can find and clear most problems by paying attention, accessing tutorials, and simply being patient, methodical and careful. It is an electronic device, it has no personality or agency; we are the problem!
I will admit that I don’t care about the features of my smartphone, it is just a way to keep in touch with my Husband, we generally text each other, and we don’t give out our cell phone numbers. We have a VOIP phone with an answering machine for communicating with the outside world. We both find voicemail to be intrusive and annoying, much like Social Media.
I was 60 when we changed to streaming only content, no OTA TV, no cable, no satellite. This is a typical old guy move; I will never willingly watch a commercial in my life, so I have chosen to use platforms that make that possible. It frees me from talking mucous, bears defecating in the woods, and other stupidity.
This self described luddite is currently exploring VPNs to give me better access to the world outside of America. I want to see content that isn’t dominated by American Corporate Media’s constant propaganda. I want to learn new things, the minute that I stop learning I’m as good as dead!
The problem with computer instructions is that most directions say what to do not how to do it.
Maybe you should find media that’s not “dominated by American Corporate Media” which you can do without needing a VPN. It is possible.
thank you
I am 79. My 76-year-old sister and her 78-year-old friend were trying to find a certain video using Google search.
They used voice; I typed it in. I found it quickly. They never did.
As a zillennial (?) I find that my elders are obviously intelligent, but they have a harder time inferring general rules about a GUI and rely more on their experience with the specific GUI they’re using. I can figure out what’s tripping them up just by playing around with the interface, even if I have no experience with it.
It’s sort of like my generation has mastered navigational instruments while theirs is stuck because the tall tree that marked “north” fell down.
What I published in BYTE Magazine almost 30 years ago:
=====
The conclusion I have reluctantly come to after more than 20 years of software development is this: Excellent developers, like excellent musicians and artists, are born, not made. The number of such developers is a fixed (and tiny) percentage of the population. Thus, the absolute number of such developers grows very slowly. At the same time, the demand for them expands rapidly due to the world’s increasing use of, and reliance on, software.
The situation is worse than it appears. Some of these innately talented people never go into the computer industry. Many who do never develop their full potential. Others become prima donnas, demanding large salaries and extreme benefits. Or they become “cowboy programmers,” shooting from the hip and holding teams, projects, or entire companies hostage. A few burn out and leave the field. Of those left, only a fraction meets the requirements for your project.
This is not to slight the decent, talented software engineers, the ones who study hard and work hard at developing and maintaining their skills. Indeed, if not for them, we wouldn’t have a software industry at all. But even they can’t meet the demand, and their efforts are undermined by the mediocre (or worse) programmers. (“The Real Software Crisis”, BYTE, January 1996)
=====
The role of talent in software engineering has been known for half a century. It’s been ignored because it seems elitist, though we have no trouble recognizing its role in art, music, math, and other fields.
There are many, many types of UI, especially in OSes like Linux.
The conclusions have no validity because they don’t look at the UI requirements at all; they look at a specific UI and generalise.
If you want a proper test, you must test against a range of interfaces and see whether ability, or ease of acquisition of skills, is invariant as they claim, or whether the skills needed depend on the UI used.
This was already blatantly obvious, but it’s nice to see more evidence to support my longstanding hypothesis.
Computer usage is mostly reading and understanding the information on the screen. It’s so easy and simple. Yet the dumb people I know are always so confused, whether on or off the computer.
How many of us have gotten a call from a family member asking for help because we’re “good with computers”, and then fixed a problem for them that we’ve never seen before? How many of us have traveled all the way over just to type their problem into Google and then click where it says to click? Like, just because I’m a software engineer doesn’t mean I inherently know how to export your TurboTax data, Uncle Mike. I clicked the menu and looked for the word Export. Can’t you read, Uncle Mike?
Oh, and my favorite is when they call for emergency help because it’s “not working”, then you drive over there only to find out they forgot their password…again.
The solution to forgetting passwords is… require fewer passwords. As B. Webster pointed out earlier { 30 years earlier… yikes…}, not enough talented people are interested in software design to make tech accessible and secure. The folks you call dumb aren’t low I.Q., they’re just overwhelmed.
In other News: water found to be wet.
Intelligence is a term that covers a wide range of partly measurable skills. None of the skills they measured included any social skills. That itself is telling. And the need for the executive skills they discussed is easily reduced by helping users understand where they are in the process. But mostly, programs are designed by people who understand the process intimately and so struggle to conceive of the issues infrequent users have.
The US military has long established that people with an IQ of 83 or below are incapable of adequately comprehending any type of skilled work. So, if specific requirements are scaled, it’s easy to see how every task demands a certain level of intelligence.
What is it about changing a UI for no apparent reason? Take a recent example: Amazon had a decent system for rearranging items on a wish list. It worked fine on desktops but was somewhat broken on any iOS, even when displaying the desktop version. It consisted of clicking “move to top.” On iOS, the top 10 items had that option unless an item was deleted or moved to a different list. I don’t know why it was necessary, but I was in the habit of keeping unnecessary things on my lists so that I could delete them in order to show “move to top” again. IOW, I made it work.
Recently, Amazon changed how to re-order a list. Now there’s an up caret and a down caret. When properly activated, the wish list item will float and can be moved up or down. It’s difficult to activate that, but it’s very easy to “copy the image” of one of the carets. When I manage to activate it correctly, I still have to drag the item to where I want it. It’s a PITA. Sure, there are workarounds for reordering items. Sometimes, I open the links of what I want to move to the top of a list and then add them to the list. Other times, I pop them into my cart where I can jump through hoops by “saving for later” and then re-adding them to the cart in the order needed to compare them. I’ve considered making a throwaway list to pop the items into in the order I want to see them, but I haven’t yet.
It’s pretty stupid, right? Why did the Amazon engineers, in their vast need to keep busy, break this thing rather than fix the mostly workable thing? I dunno. They didn’t make it more usable or easier to handle for anyone with any IQ, so was there any point to it? There also seems to be no way to communicate with Amazon about this, because they default to asking what device and browser I’m using and even what keyboard I use. I’m not unintelligent, but I’m struggling with the mechanics of the change as well as being gobsmacked by the sheer stupidity of the thing.
This study makes the error of interpreting a narrow skillset as intelligence. Intelligence doesn’t just apply to the ability to use a computer.