i said the other day i probably wasn’t suited to the fields of #lawenforcement and #security: i’m a free-thinker, a nonconformist in some serious senses, and almost certainly neurodiverse in others. people who work in the aforementioned fields need to be attached to rules, regulations, procedures and tasks. that makes it hard sometimes for them to appreciate the kind of person i often can be.
they, generally, are not. which makes me no better than them at all. nor them anything other than different from me.
but that doesn’t mean we couldn’t connect the two ways of being to better catch a creative criminality:

it’s my assertion and firm belief that we’re missing out on neurodiverse ways of seeing that would help us better understand the world of #complexproblems around us. and this is, in part, because we rely on technologies which, perhaps without anyone intending it, have become firmly neurotypical — but are no less neurotypical for that. technologies which, as a result, reinforce the ways of seeing and doing that most of the world’s professionals need to share, rather than encourage them to take a broader view of that world.
i think we can do much better: i think we can bring the neurodiverse and the neurotypical together: not just through company inclusion policies and so forth, but much more by engineering different #it-#tech architectures.
exactly as what follows, in fact — here, in a separate field, a proposed roadmap for dealing with the #complexproblems of climate change:

[embedded presentation: the proposed climate-change roadmap]

so to finish this post, something that happened to me today, just to show that i — as a different kind of thinker from those who usually work in such fields, and even as i remain such a thinker — might be able to usefully contribute to the reality that deeply creative criminality has become: what has been called the #darkfigure of crime since the 19th century, and which, for a couple of years now, i’ve preferred to call #neocrime.

the anecdote in question:
here’s an example of my intuition in action. and i might be totally wrong. what i want is not to prove i am right, but to be able to establish, absolutely clearly and without anyone being able to disagree, whether i am wrong …
“that gangster-looking guy potentially wanted at least three things when he asked me to use my card, in exchange for his cash, for a pizza order he said he wanted to make:
1. get my card number from his mate at the pizza place.
2. give me counterfeit cash so i’d get into trouble when i tried to use it.
3. see if, with an excuse to approach me, he could identify the name of my iphone (i was tethering it to my laptop at the time), so that he and his mates could sniff for it when i was using it in the future.
if i am right about him being a gangster, he had already tried to intimidate me by standing near the wall and not moving an inch as i tried to get by behind him, while he was looking at his phone in front of the lift on the floor 1 landing yesterday.”
as i say, i might be totally wrong about him. he might be a humanitarian of the very best kind.
but what if we could create systems which didn’t set out to prove we were right … but instead validated whether or not we were wrong? that is, in this case, whether i was wrong.
and just to frame it better:
• he was at the hotel i am staying at
• i was working for hours at my laptop in a darkened corner: so he had every reason — seeing me so intently wrapped up in my work — not to approach me
• the receptionist (according to the guy) had already refused to take his cash
• no one uses cash in stockholm
and so for all these reasons, i actually think this might have been an example of #darkfigure waiting to happen.
of course i could be exhibiting a dreadful prejudice. but this, precisely this, is why i want us, together, to develop systems into which we can enter our deepest thoughts, and which make it possible for us not to prove what we think is true — but to validate (an utterly different matter altogether) whether it is true or not.
just this.
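
a tiny postscript, for the technically minded. nothing below exists anywhere that i know of: the names are mine alone, a hedged sketch of what ‘validate, don’t prove’ might begin to look like as a data structure. a hunch gets recorded together with the concrete observations that would falsify it; the system only ever reports whether those falsifiers have been met, never that the hunch is proven.

```python
# a purely illustrative sketch: a hunch is stored alongside the concrete
# observations that would show it to be WRONG. the system never declares a
# hunch proven; it only reports whether it has been falsified so far.
from dataclasses import dataclass, field


@dataclass
class Hunch:
    claim: str                      # the intuition, written down as-is
    falsifiers: list[str]           # observations that would disprove it
    observed: set[str] = field(default_factory=set)

    def note(self, observation: str) -> None:
        """record an observation against this hunch."""
        self.observed.add(observation)

    def status(self) -> str:
        """report falsification only; never confirmation."""
        met = [f for f in self.falsifiers if f in self.observed]
        if met:
            return "falsified by: " + "; ".join(met)
        return "not yet falsified (which is NOT the same as true)"


# the anecdote above, as the worked example
hunch = Hunch(
    claim="the man who asked to swap cash for my card was working an angle",
    falsifiers=[
        "the cash he offered was genuine",
        "the pizza order was real and was delivered",
        "the receptionist confirms he is a known, regular guest",
    ],
)
hunch.note("the pizza order was real and was delivered")
print(hunch.status())
```

the point being that ‘not yet falsified’ is the strongest thing such a system would ever let us say about our intuitions.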
