why #bigtech really wants to destroy human agency (part the second) (or more on the “european HUMANISING union”)

someone once argued that it was better to be hated for what one is than loved for what one is not.

as with many of these nicely turned phrases, the premise is necessarily incomplete.

and, as with my projects on #intuitionvalidation, we face the same false dichotomy, this time from the #it- and #ai-#tech industries.

they argue it’s either humans or machines. they argue there’s no alternative to the future they insist we must be utterly horrified by. and they say that, ultimately, human goalposts can never be moved:

www.secrecy.plus/hmagi


examining a false premise

yet let’s examine this premise more closely. the coaching industry makes today’s generations of humans measurably better than previous ones in all sorts of business and related fields. sports science gets the very same species to hit higher and higher physical and mental records every year, both on the track & pitch and off it whilst training. artists paint with ever more astonishing technique: paints and brushstrokes and digital wisdoms history has truly never imagined before (when, that is, #ai isn’t stealing their #intellectualproperty). then, actors become figuratively, literally, and visually more adept at tugging at our emotions and telling new truths. and finally, writers deliver stories we never thought possible, sometimes in volumes, and with a quality, we never considered practical.

in all manner of technologies, then — high and low both (a pencil of hyper-realist art, after all, can be considered a technology too, and perhaps any of its uses should be considered thus) — humans ARE having their goalposts moved, amazingly. in all the sectors mentioned we are overcoming our previous selves: but not aggressively, not competitively. in grand solidarity, first and foremost; solidarity above all, even when competing against each other. solidarity in which the professional and the focussed amateur alike know the work being put in to achieve such outcomes.

examining the lies — there’s no other word, unfortunately — of the majority of #it and #ai promoters

now let us examine #it and #ai. in none of the above examples are humans made less relevant. yet in the vast majority of instances of the #ai and #it industries i now debate, we humans are being purposefully and choicefully automated out of choice and purpose. they say change is inevitable. they never admit that its nature isn’t. but it isn’t. and that’s a real problem.

we need to be clear: it’s easy money that’s driving the desire of #ai and #it promoters to destroy so massively the human agency that makes life worth living.

because the power the owners of #it and #ai companies wield means that their choices become ours, even though in any other sector they would never have been our choices.

changing humanity for the better by using machines to augment humans, not to automate them away for their owners’ wallets

in an earlier post today i discussed how we had progressed from world war to the european economic community to the european union: soldiers … traders … humans once more … and perhaps, increasingly, humans in a way we have never been before.

it should be rebranded the #ehu, you know: the “european HUMANISING union”. not just for standing firm against russia in ukraine; not just because war in the rest of europe is now generally inconceivable; not only because #industry5 and a properly #circulareconomy are being delivered faster in #europe than anyone cares to deliver them elsewhere, and certainly in better faith than in other places; but also because what was clearly a battlecry when #gdpr first launched and faced its moment of truth is now moving us all towards a generational shift in #it and related fields.

remember #search? it was the last time the big #techcorporations successfully ripped off copyright owners. generative #ai — at least in the european HUMANISING union i have just conceptualised, and in this post-#gdpr period — will not be getting such an easy ride.

this i can promise you.

and it makes me absolutely overjoyed.


relevant online whitepaper:

www.sverige2.earth/overview | on delivering happy clever societies


how to save #chatgpt-x from its founders

i just saw an example of the power of culture over rules & regs when looking to achieve a particular outcome.

a human being removed a box cover and fluffed up some bags of crisps not because they had a rule saying when, but simply because their culture said now.


why conflict in the first place, for goodness’ sake?

an #ai designed to foreground the functions of machine approaches to #complexproblem solutioning always uses rules & regs. it will do what you want it to do as long as you have told it once, and told it in accordance with the needs of your domain. that is, all of its needs.

a #humanbeing made bigger by #tech, meantime — as #film- and #movie-#tech has always chosen to do (the mic making the human voice bigger, the camera increasing the vision of the human eye, and even the stage extending great actors’ capabilities to express themselves powerfully via mise-en-scène) — will always operate better with the unexpected.

on the very human ability to deal with the unexpected

the unexpected doesn’t have to be: but it is. whether because it really was (9/11) or because you’re a newbie (me, all the time, in almost everything i do), our grand virtue as #humans supported by #machines (in this order), machines designed primarily to extend our existent virtues instead of deepening existent pockets (both are good, mind — when they coincide; but to my thinking the first is a problem to be solved and the second should never be permitted to become a solution in search of the former …), is that the unexpected is what engages us most deeply in life. and it is therefore what makes us reach our heights, every time.

in truth, it’s the kind of #machines we’d be if we were machines: except we’re not. we’re flesh and blood: we forget, only to remember a fabulous idea six months later; we get frustrated, only to go on a drinking binge and then, after the hangover, find marvellous beauty lodged amazingly in our heads; we get angry with another human for rejecting the beautifully formed solutions we’ve worked and reworked so often … and then, after a sulk of maybe days, we come back and find an even better synthesis of the two.

as #humans, the unexpected is what we are. it’s only when we use #techtools designed to be cheaper to build and more profitable to hype that we act more like these #machines ourselves, and may appear for a while to lose our capacity to surprise. to be different from machines, that is.

but it’s not true. believe me. an example: i worked deeply in language learning for two decades in a previous life, and i know exactly what happens when the job of teacher becomes that of enabler: the task is no longer one of acquiring more data; and then, at last, it’s producing what we need as humans with what we’ve already got as thinkers that becomes the real challenge and delight.

and we don’t steal someone’s intellectual property to build an empire, either. it’s just not part of the gameplan.

no.

really.

we don’t.

and how many different types of burgers did #siliconvalley’s stand actually sell in the first place?

meanwhile, #siliconvalley has lately (“last three decades” lately, at that) delivered only one piece of money-making #tech.

when the #newspaperindustry was an industry, we called this “tech” #classifiedadvertising. this kind of #advertising had great virtue, too: to make people want to come to it, buy the products advertised, and so pay the bills of what was actually very often a #publicservice, journalists wrote the greatest analysis and deconstruction of democratic and anti-democratic players; descriptions of things that were going just dandy, and then again of things that were frankly going belly-up; and, finally, we’d even get the most beautiful features and reportage, manifesting the world around us with #photography and #words that became #art in undeniable consonance.

and all of the above was rigorously original content.

#siliconvalley? hmm …

on the robbery of #ip

my question has to be this: why do we now go to the #classifiedadvertising we find on #searchengines and #socialnetworks and other sorts of apparent innovations?

well …

tbh, basically to read someone else’s unpaid-for content: what’s more, often a newspaper’s, quoted in full by a reader who, in theory, isn’t paying anyone for the honour either.

this is not right. it happened with #search: that is, the robbery of #ip and of content under clear #copyright. we shouldn’t allow it to repeat now with tools such as #chatgpt-x.

but can we square this circle to the satisfaction of all players?

why i’m of a mind now to propose a radically different approach to how #ai of any kind — never mind just #chatgpt-x & co — is trained and launched onto markets.


no. we don’t discard any invented #tech out of hand. i’m not suggesting that. but in #europe at least, and in #sweden maybe to start with, the content used for the training of any #ai such as these must be duly paid for.

always. every use.

how? we could have a spotify-type platform which #ai developers could subscribe to, allowing for sanctioned access to all kinds of content, not just music of course.

and then the #ai tools would have certificates showing “denominación de origen” for all the #ip used to train up the #ai in question. and in their absence, the product could not be released in any legal form to the market.

this is practical; the streaming tools already exist and would allow agile development to continue; and we would NOT be repeating the daylight robbery conducted all those years ago under the banner of #search and its #classifiedadvertising.
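to make the proposal a touch more concrete, here’s a minimal sketch of how such a platform and its certificates might hang together. every name in it (ContentLicensingPlatform, LicensedWork, TrainingCertificate, per_use_fee and so on) is a hypothetical illustration of the idea, not any existing api or service:

```python
# hypothetical sketch of a spotify-style licensing platform for #ai training content.
# all names and fields here are illustrative assumptions, not an existing api.

from dataclasses import dataclass, field


@dataclass
class LicensedWork:
    work_id: str          # identifier of the copyrighted work
    rights_holder: str    # who gets paid
    per_use_fee: float    # fee owed each time the work is used in training


@dataclass
class TrainingCertificate:
    # the "denominación de origen" attached to a released model
    model_name: str
    licensed_work_ids: list[str] = field(default_factory=list)


class ContentLicensingPlatform:
    def __init__(self) -> None:
        self.catalogue: dict[str, LicensedWork] = {}
        self.payouts: dict[str, float] = {}   # rights_holder -> accumulated fees

    def register_work(self, work: LicensedWork) -> None:
        # rights holders list their content on the platform
        self.catalogue[work.work_id] = work

    def license_for_training(self, model_name: str, work_ids: list[str]) -> TrainingCertificate:
        # pay every rights holder for every use, then issue the certificate of origin
        cert = TrainingCertificate(model_name=model_name)
        for work_id in work_ids:
            work = self.catalogue[work_id]   # unlicensed content simply isn't on offer
            self.payouts[work.rights_holder] = (
                self.payouts.get(work.rights_holder, 0.0) + work.per_use_fee
            )
            cert.licensed_work_ids.append(work_id)
        return cert

    @staticmethod
    def may_be_released(cert: TrainingCertificate | None) -> bool:
        # no certificate, no legal release to the market
        return cert is not None and len(cert.licensed_work_ids) > 0
```

usage would be as simple as registering works on the platform, calling license_for_training before every training run (every use, duly paid for), and gating any legal release on may_be_released: no certificate, no market.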

wdyt?

there’s a business model in this too; not dissimilar at all to spotify as it stands.

no?

coffee, anyone?