[P2P-F] Fwd: Emailing: Artificial Intelligence + Algorithms = Assumptions!.htm

Michel Bauwens michel at p2pfoundation.net
Fri Nov 4 07:44:01 CET 2016


---------- Forwarded message ----------
From: Hazel Henderson <hazel.henderson at ethicalmarkets.com>
Date: Thu, Nov 3, 2016 at 9:43 PM
Subject: Emailing: Artificial Intelligence + Algorithms = Assumptions!.htm
To: parminder <parminder.js at gmail.com>, parminder <parminder at itforchange.net>
Cc: Michel Bauwens <michel at p2pfoundation.net>, "othernews at netcabo.pt" <othernews at netcabo.pt>, "Thalif Deen (thalifdeen at aol.com)" <thalifdeen at aol.com>, "kama at netcabo.pt" <kama at netcabo.pt>, "Kuttner, Robert (kuttner at prospect.org)" <kuttner at prospect.org>, "Edward Fullbrook (pae_news at btinternet.com)" <pae_news at btinternet.com>, LaRae Long <gogreen at ethicalmarkets.com>


Hi Parminder: In case you missed this editorial I wrote in August, please feel free to circulate it to ISF participants.

Thanks,

Hazel

Artificial Intelligence + Algorithms = Assumptions!
<http://www.csrwire.com/blog/posts/1753-artificial-intelligence-algorithms-assumptions>
CSRwire Talkback

Submitted by: Hazel Henderson
<http://www.csrwire.com/blog/bloggers/23-hazel-henderson/posts>

Posted: Aug 12, 2016 – 06:00 AM EST

Tags: artificial intelligence, ai, economy, algorithm




Artificial intelligence (AI) has entered the public debate with DeepMind's
recent win over a champion of the Chinese game Go.  AI is all about
computers getting better at solving problems formerly thought too difficult
for machines and best left to humans.  Since the 1970s, a small group of
computer specialists and mathematicians have based their hopes on teaching
machines to follow the rule-based learning of human reasoning.  They designed
algorithms (coding these rules into software programs) which they hoped
would enable computers to emulate human thought processes.

Today, these algorithms run more and more of our everyday lives.  They can
decide whether our credit score is good, whether we get hired for a job, and
whether we can board an airplane or be admitted to the country of our
destination.  Algorithms dominate the world's stock exchanges, evaluating
most companies and deciding whether to buy or sell, as well as causing
"flash crashes".  Political campaigns are based on algorithms that decide
which voters will turn out and which candidates they'll favor.  The
much-vaunted Internet of Things (IoT) uses algorithms to monitor our babies,
open our locks, control our use of energy, steer our cars, and oversee our
fitness programs and diets.  Algorithms control which ads we see online,
monitor our buying habits, and track our whereabouts via GPS and our
smartphones.  Increasingly, algorithms program drones and weapons systems.

This brave new world of algorithms and big data has taken over the
economies of most post-industrial societies and the lives of their
citizens.  Most relish the new connectivity, social media and instant,
always-on lifestyles – happily surrendering their most personal information
and privacy.  Few have asked about the assumptions and biases that human
programmers may have built into all these algorithms.

New evidence is coming to light about how the unconscious biases most people
hold can skew algorithms, including gender and racial biases that affect job
selection.  In *New Scientist* (July 16, 2016), Aviva Rutkin describes how
algorithms with such hidden assumptions have denied people credit, jobs and
even parole, and hails the new General Data Protection Regulation (GDPR)
recently approved by the European Parliament.  The GDPR calls for companies
to prevent such discrimination, which filters through algorithms hidden in
the guise of mathematical impartiality.  Rutkin also cites the US White House
symposium on AI, which explored these issues and how Silicon Valley
programmers and technocrats can arbitrarily dictate policies affecting the
lives of citizens.
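
As an illustration (not from Rutkin's article or the GDPR), here is a minimal
sketch of how such hidden bias can be surfaced: audit an algorithm's decisions
by comparing selection rates across groups.  All names and numbers below are
hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, e.g. a hiring model's output."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.
    A ratio well below 1.0 (0.8 is a common rule of thumb) suggests the
    'mathematically impartial' algorithm is skewed against the protected group."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical audit of an automated screening tool's decisions:
decisions = ([("women", True)] * 12 + [("women", False)] * 88 +
             [("men", True)] * 30 + [("men", False)] * 70)
print(selection_rates(decisions))                         # {'women': 0.12, 'men': 0.3}
print(disparate_impact_ratio(decisions, "women", "men"))  # 0.4 -- well below 0.8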

An egregious example of a faulty algorithm was reported in *The Economist*
(July 16, 2016): a team of Swedish scientists discovered that up to 40,000
brain-research studies based on functional MRI scans may be invalid due to
misinterpretation of the blood-flow patterns assumed in the algorithms
applied.  Another example is the business-as-usual assumptions in the
International Energy Agency's forecasts, which have failed to track the shift
from fossil fuels to renewable energy.

Debates about AI, robots, job losses, and privacy and security have surfaced
in many countries, including the question of who owns all the personal data
being patterned into algorithms by spy agencies, social media companies,
advertisers, marketers, insurance and banking firms, and law enforcement.
All this data is valuable and vital to our information-based economies.
Microsoft Research's Jaron Lanier, in *Who Owns the Future?* (2013), claims
that Google, Facebook, Amazon, LinkedIn, Snapchat and Instagram should pay
every user for each and every bit of their personal information.  Lanier
notes that their business models depend on selling this data to advertisers,
insurers, bankers and political campaigns, so each user should receive
payment for their data – quite feasible with existing software.
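
A minimal sketch of what Lanier's proposal could look like in software,
assuming a per-use micropayment and a simple ledger; the class, the rate and
the records below are hypothetical illustrations, not an existing system.

from collections import defaultdict

PRICE_PER_USE = 0.001  # hypothetical payment, in dollars, each time a record is used

class DataLedger:
    """Hypothetical ledger that meters every use of a person's data and credits them."""
    def __init__(self):
        self.balances = defaultdict(float)  # user id -> amount owed
        self.log = []                       # audit trail of every use

    def record_use(self, user_id, purpose):
        # Called whenever a user's personal data is sold or used for targeting.
        self.balances[user_id] += PRICE_PER_USE
        self.log.append((user_id, purpose))

    def statement(self, user_id):
        return {
            "owed": round(self.balances[user_id], 4),
            "uses": [p for u, p in self.log if u == user_id],
        }

ledger = DataLedger()
ledger.record_use("user-42", "ad targeting")
ledger.record_use("user-42", "insurance risk model")
print(ledger.statement("user-42"))  # {'owed': 0.002, 'uses': ['ad targeting', 'insurance risk model']}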

The public is awakening to this new threat of big data as "Big Brother"
while acknowledging its potential benefits.  We do not need many of the
idiocies promoted for profit in the Internet of Things.  For example, a
Parks Associates survey found that 47% of US broadband households have
privacy or security concerns about smart-home devices.  Tom Kerber, Parks
Associates' Director of Research, cites recent media reports of hacking into
baby monitors and connected cars and suggests that concerns might be eased
if firms offered consumers a bill of rights.  At a minimum, a standard for
all smart devices should allow users to switch off their connectivity and
operate them manually.  A paper co-authored by researchers at Oxford
University's Future of Humanity Institute and DeepMind advocates such "off
switches" for AI systems (*The Economist*, June 25, 2016).  How would such
safeguards work in electronic finance?  We as consumers also need a greater
ability to monitor the assumptions in the algorithms that run our lives.
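
How might such an "off switch" look in electronic finance?  A minimal sketch,
assuming a shared halt flag that either a human operator or a circuit breaker
can trip; the price feed and order function are hypothetical stand-ins, not
any exchange's actual interface.

import threading
import time

class HaltSwitch:
    """A shared 'off switch' a human operator (or a circuit breaker) can trip at any time."""
    def __init__(self):
        self._halted = threading.Event()

    def halt(self):
        self._halted.set()

    def is_halted(self):
        return self._halted.is_set()

def trading_loop(switch, get_price, place_order, crash_threshold=0.10):
    """Automated trading loop that always defers to the off switch and halts
    itself on a flash-crash-like move beyond the threshold."""
    last_price = get_price()
    while not switch.is_halted():
        price = get_price()
        if abs(price - last_price) / last_price > crash_threshold:
            switch.halt()  # self-trip instead of trading into the crash
            break
        place_order(price)
        last_price = price
        time.sleep(0.1)    # poll interval; a human can call switch.halt() at any time

if __name__ == "__main__":
    switch = HaltSwitch()
    prices = iter([100.0, 101.0, 100.5, 80.0, 99.0])  # hypothetical feed; 80.0 trips the halt
    trading_loop(switch,
                 get_price=lambda: next(prices),
                 place_order=lambda p: print("order at", p))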

Greg Lindsay of the New Cities Foundation reports in Smart Homes and the
Internet of Things
<https://issuu.com/atlanticcouncil/docs/smart_homes_0317_web/1?e=23027907/34227438>
that 66% of smartphone users are afraid of these devices tracking their
movements.  The Atlantic Council's March 2016 seminar on "Smart Homes and
Cybersecurity" concluded that it is already too late to protect homeowners
and other users.  Google co-founder Larry Page is pouring billions into
developing flying cars ("Propeller Heads", *Businessweek*, June 6, 2016).  Is
this what the public wants?  Duke University engineering professor Mary
Louise Cummings testified at a recent Senate hearing on driverless vehicles,
at which Google, General Motors and Ford were requesting over $3 billion in
subsidies.  She noted that these companies had done no real testing of
driverless vehicles and doubted they could be both autonomous and safe.
What would flying cars and drones for the 1% do to our already crowded
skies and quality of life for the 99%?

All this is why Ethical Markets proposes a new standard to shift the
balance of power back to consumers and citizens: a new Information Habeas
Corpus.  England's Magna Carta of 1215 laid the foundation for habeas corpus,
which assured individuals' rights over their own bodies and was further
codified by Parliament in 1679.  Today, we need to extend this basic human
right to our brains and all the information we generate in all our
activities.  Time for the Information Habeas Corpus!



-- 
Check out the Commons Transition Plan here at: http://commonstransition.org


P2P Foundation: http://p2pfoundation.net  - http://blog.p2pfoundation.net

Updates: http://twitter.com/mbauwens; http://www.facebook.com/mbauwens

#82 on the (En)Rich list: http://enrichlist.org/the-complete-list/