but do i make sense (in worlds of incense)

My latest #poem on #siliconvalley’s corrupting choices, plus a link to the new online #whitepaper on #newlean … and other observations …


i'm incensed to be honest 
and yet not:
no gorgeous odour drifts like a breeze
of glorious glees
nor assails plentifully my senses

but incensed i remain even so
at how rich people remain rich
by plundering our brains
and attacking our better selves
whilst all we had been asking for was this:

to be human to the max
to enjoy being in the sack even
with someone who hadn't met us online
before we began face-to-face to vibe ...
... who hadn't had us sussed

and hadn't trussed us up mentally
so that they saw us way earlier
than we ever saw them
via that den of thievery
that is silicon valley since forever and all

because the real problem now
is that surveillance to this max
which we conduct on each other
and use to blather and gossip about
so that the outcomes

become attempted suicides
and depressions of the most revealing
and the permanent anxieties
of living a life
as if it were a death on legs

that's what big tech has brought us to
and that's why i'm incensed as i am
and angry so very much
that the human touch even that
is now a monetisable gap

that an elon and satya
can invade and intrude with their choices
to make the kind of technologies that hurt
and be leaders of the worst
instead of wanting to be peers of the first

yes i am incensed in the western sense:
angry to the very maximum
and it seems my words fall
on barren grinds
of coffees never exchanged

yet in a more eastern sense
i smell your comprehension growing
and see you showing
an embrace approaching
to my ideas of #intuitionvalidation

and it's this incense i sense
of the flowers and the astringent
and the spices of natural inclinations
which i now turn to
as i turn my back on the west i was born to

because it's here i find my pals of thought
where nothing needs repeating
and nothing needs reasserting
and all is agreement and fulsome assent
and that's what it's all about:

knowing how your place
is no longer where you are
nor has ever been perhaps
and maybe it's never been so
and just time ... just time to go

choose people to work with, not their institutions

https://www.linkedin.com/posts/jewel-pena-53257213_activity-7041962988248952832-m7Pq

i realised this not long ago. i don’t want to work with this company or that. i want to work with people who also, simultaneously, may work for one company or organisation or another.

when the institution overrides the individual from the start, then even if all the starting-signs are cool, the individuals will one day — especially when huge amounts of money all of a sudden become likely — be inevitably and overwhelmingly overridden by their institutional framework.

i don’t intend for us to start in a way i don’t mean us to go on.

so first i want to meet people. i want an organisational structure which generates a hybrid of #holacracy. and i want brains to show themselves to be the most important factor and matter of all, where the overarching #gutenbergofintuitivethinking and #intuitionvalidationengine projects are concerned.

www.ivepics.com

because if you choose people first, and the people are right for you, then the institutions will automagically remain right for you too.

at least … whilst your people of choice remain at the institution in question. and they will do in general, for sure — if the institution remains worth staying at. for in the question of #intuitionvalidation there is no building-block which is more significant than the human as individual.

thephilosopher.space/space-1-0-the-philosopher-space

platformgenesis.com

www.secrecy.plus

omiwan.com

mils.page/presentations | #milspage #presentations

mils.page/intuition-day | #milspage #intuitionday

mils.page/distributed-privilege | #distributedprivilege

stop saying what #ai can do and start saying what needs to get done

introduction:

choosing not to connect when something connects only unsatisfactorily is a really important skill which the good human brain exhibits.

the capacity to forget for a bit is a human virtue of the highest thinking.

to create a future NOT based out of the past is what separates humans from our tools. (even as sometimes we ourselves are turned abusively into aspects of the latter.)

post:

so: #lean and startup-land solve one problem and one problem only. the problem they solve is not what the customer is hurting most about. it’s not the pain in the customer journey, as tech dogma claims.

no.

it’s the pain on the journey on which both client and supplier begin to ride together. it’s the pain-point that the two of them can agree on as being where they must meet:

  • not what i as client most need sorting necessarily, but what i can afford right now according to your price list.
  • not even what you as supplier are most objectively suited to resolving but, rather, what you as supplier find it easiest to convince me is what i need, once you’ve identified where it is i am most prepared to first stump up some dough. (and with this, any dough ffs. just so long as some.)
  • and finally, most definitely, in no way do such processes and spaces guarantee at all that the world’s most complex (note: not complicated) and, therefore, pressing problems will find their solutions even attended to, never mind provided, by such startup ecosystems, business mindsets, and wider, deeper ways of working.

speculative tech like #ai — which has supposedly been on the point of beating humans since at least the 1980s (and probably for decades before) — always needs to convince us it’s just about there. the funding, for one thing, demands it.

it still isn’t. here are my reasons why:

1. until an #ai (let’s use this loose term for now) is able to get so frustrated with an insoluble problem that it ends up getting blind drunk … only to wake up in a haze to a solution perfectly formed …

2. until an #ai awakes from a beautiful dream with a fabulous new idea subconsciously imagineered to perfection overnight … for it then, in the second of that awakening, to forget totally the detail — or even the concept itself — for six further months … and then for everything to come flooding back in even more astonishing form, to be finally recorded, scoped and amazingly implemented …

3. until an #ai may choose to NOT make connections or identify patterns or see relationships or deliver on finalities … because something doesn’t fit quite as right as it one day might …

4. until an #ai has a leap of faith based on data it INVENTED OUT OF THE FUTURE … because creation is the nature of the human brain … even when utterly utterly mindlessly terrifying:

until all the above and other things happen … well … our dearly beloved proponents of #ai (thus loosely described) will continue to argue that we can do everything on the basis of what’s already been patterned. and that the apex of human achievement involves ONLY copying what others have already done — just better: that is, faster, more accurately, for longer without tiring — essentially, an alpha male’s orgasm … no?

🙂

and thus the competent delivery of #ai and similar technologies only ever needs to aspire to this.

the new #ai unleashed onto the world recently — #chatgpt — is praised for writing school and university essays indistinguishable from human ones. it’s praised for giving advice, crackpot-like or otherwise. it — and similar tech — receives plaudits for faking flemish masters’ painting styles. i haven’t seen, but surely imagine, that songs in the key of sheeran have already flooded the social networks of this planet, devised at the hands of this sort of code.

we need to stop this.

we need to stop saying what #ai can do.

we need to start asking two questions:

1. if #ai can do what a human being can do, is it actually an achievement of historical record that it can do exactly the same as a process another human being has already carried out? that is: why WANT to indistinguishably copy a flemish master? why not always prefer to discover the master — or mistress — deep and unrevealed inside your very own being?

because whilst copying is cool, it’s only cool if it leads to synthesis and final originality. after all, in the act of production we may uncover the output of creation. and then understand, perhaps, that there might inhabit well inside our persons — each of us — geniuses of magnificent shine:

(equally, of course, we might ask — as rightly people have — whether, if a machine so easily reproduces a human activity such as writing an essay for university or school, our civilisation is focussed on the right testing regimes in the first place. but that’s for a quite different post.)

2. it’s bad enough to find ourselves needing to ask whether the things our tech can duplicate in us are things we would even consider it societally desirable to carry on doing ourselves, in the absence of such tech. it’s much worse to say a whole tech ecosystem — that which is structured around #lean, customer journeys, pain-points and the overbearing need to invoice a digital service or product asap (in short, startup land as we have known and loved it for so long) — doesn’t solve the vast majority of remaining deep and complex world problems because, actually, its practitioners and owners have designed it specifically so it wouldn’t.

there’s much more money in planning a drip-feed of ten years of product releases than in creating a system which might identify the real dangers of immensely, horrifyingly human #nonconformism: that is, its thinking processes and the disadvantage we now face, as we have become progressively less capable of this sort of thinking within the corporate-sanctioned dynamics of diluting teamwork.

because whilst we in corporate-land choose to inhibit creative #nonconformists via our tech architectures and business structures, the putins of the world have clearly been doing — longitudinally — quite the opposite.

haven’t they?

so let’s not only stop and duly think whether a tech that can repeat what the majority of us humans do every day is worth the thunder and lightning its proponents will continue to generate. yes: it’s useful in many respects, of course. but we must stop asking what our tech can do and start, instead, to focus our funding on OUTCOMES that need DELIVERY, not BOTTOM LINES that must be DELIVERED.

otherwise, we get what we get: and so we have.

an example? just one? here’s one … a western defensive and attacking capability to identify and forestall future ukraines, and maybe now, as we are allowed to voice these things finally … alongside neo-terrorist pandemics … before they even become a glint in evil organisations’ eyes:

summary:

let’s just stop, shall we?

let’s stop being bad techies.

let’s stop fetishising the machines whose investments have cost us so much, and whose future ROIs we have planned out to the millimetre.

let’s be human, instead.

let’s say:

  • we can make money out of making humans more human, not less significant
  • we can make money out of prioritising solutions that cure rather than perpetuate
  • we can choose to focus on complex and spread the word that this won’t mean complicated
  • and we can finally argue that the various startup ecosystems we have become sooo used to — as they provided the only furniture with which we have lived our comfortable lives — do have their serious downsides: whilst simultaneously (of course) solving some of the accumulatively littler things fabulously, they have equally served to diminish the big things gravely — especially as it becomes increasingly apparent that this has been a deliberate dynamic all along …

because inventing a system which does lots of relatively small things massively — that is, automatedly — doesn’t mean you are anywhere near capable of, or prepared for, delivering on the resolution of massive things pointedly.

and, after all, this is a difference we are currently choosing not to even discuss.

no?