Blog

What's new?

Mainstream tunnel vision, echo chambers, and filter bubbles

February 13th, 2018

Worrying about the trustworthiness and reliability of the information we consume has become one of the key concerns of our age - with some commentators even warning that fake news, echo chambers, and filter bubbles threaten democracy itself. Information professionals are skilled in finding reliable sources, taking a neutral standpoint, and understanding biases, but with the deluge of information being published nowadays, it is increasingly hard to assess every source. The danger is that researchers settle into using only a small set of sources and then miss early warning signs, alternative voices, and contrarian viewpoints. Regulatory researchers need to be the first to know about developments in technology and new financial products; they cannot wait for the news to become "mainstream" and be covered by the major players, so the "popularity" of a source is not necessarily a useful metric for them.

The mainstream dominated the past
Before the rise of the Internet, there was little coverage of non-establishment viewpoints by major media players, and alternative publications were easily distinguished from the mainstream, largely because you could tell they were low-budget productions. Anarchist or neo-nazi or new age or other “fringe group” newsletters were obviously photocopied. Special interest news published by non-profits or community groups might have higher production standards, but tended to be associated directly with the group funding the publication and clearly branded because they wanted everyone to know who they were – Greenpeace for environmentalist news, for example, or Amnesty International for human rights coverage. In other words, you only had to look at the publication to be able to tell where its biases were likely to be.

However, it was also very difficult to find contrarian or alternative voices that were raising new concerns, showcasing new technology, promoting new products, or discussing new ideas. Researchers often relied on personal social or academic networks to "stay ahead of the curve". Journalists could build a reputation and a career by "knowing who to talk to", and researchers and analysts could gain an edge over their peers by building relationships with skilled librarians, who knew where to access hard-to-find but reliable sources, such as obscure but highly specialised technical or academic publications.

All sources look the same
The advent of the web promised a glorious “democratization” of the publishing process. At first, this was widely welcomed by those who thought that all the information they needed would become free and easy to find. Expensive technical sources would be made available to everyone. A single-author blog would be as easy to find as a mainstream newspaper. Researchers thought their jobs would become simple - the obscure but interesting voices that might have been missed would pop up with the same prominence as established sources. It was thought the cream would rise to the top, and alternative viewpoints would promote healthy debate and challenge the "establishment propaganda" of wealthy "old media" outlets that wanted to promote specific political agendas.

However, it was not long before “mainstream” organizations with a lot of money were able to produce slick, well-designed websites, which looked and worked differently from websites that had been hand-crafted by individuals or built by academics. Organizations with money were able to invest in marketing, SEO, and other promotional activities to ensure they transferred their dominance to the online world. "Alternative" sites that were less search-engine friendly, or that simply did not get the PageRank popularity scores needed to put them on the first page of results, began to slip back into obscurity. At least it was still easy to tell the difference between an expensive and a low-budget site. Genuine academic sites had a certain look and feel, while individual blogs and personal sites tended to be far less complex than corporate sites.

With the rise of high quality blogging software and falling costs of production technology, that gap closed, and those differences are now far more subtle. “Established” old media, such as local papers, have seen their budgets shrink, while technology has become cheaper, so anyone wanting to build a website from scratch with a limited budget can now produce a site that looks pretty much the same as an “established” one.

So, now we have satire, personal blogs, websites of “old media” outlets, and news sites that all look almost the same. That is the equivalent of your local anarchist collective being able to produce a newspaper that looks like Time magazine, and the National Enquirer looking much like The Economist, while the blogs of a reputable academic and an unqualified political pundit may look almost indistinguishable.

Mainstream tunnel vision
The tsunami of information being generated means that once again, researchers are finding they have to rely on personal networks, individual recommendations, or expensive paywalled publications to help them find the signal amidst the noise. This limits the ability of researchers to gain a truly balanced overview of a subject and leads to echo chambers where groups of researchers use the same limited set of sources, without realizing they have closed themselves into a bubble.

Tracking the blogosphere can be especially labour-intensive, as new blogs need to be assessed for validity, authors identified and their credentials checked, and the rate of decay for online sources means that many excellent and useful blogs may only receive a few posts before fading away. Distinguishing the genuine bloggers who have valid points to make or interesting things to say from politically or commercially motivated spokespeople can be tricky. The incentive to spread biased information is particularly strong in the world of finance, where rumours can affect stock prices and real money in people's pockets. The tornado of opinion - good, bad, biased, and indifferent - swirling around the topic of cryptocurrencies is a case in point.

A huge problem for analysts and professional researchers is that an over-reliance on a manageable set of "trusted" sources and a relatively limited network of personal contacts lead to the same information being circulated. There is a tendency to rely on a smaller and smaller set of "verified" sources, but that takes us back to the situation in the past, where only a few loud, well-funded voices were heard, and the interesting minority voices were just too hard to find, buried amidst the mess of propagandists, amateurs, and cranks.

This is particularly dangerous when it leads to an echo chamber of groupthink, as the alternative voice or warning signal gets lost. Once inside a filter bubble, researchers are in danger of thinking they have the full view, when in fact they are only seeing a tiny, selection-biased slice. Traditional search engines exacerbate this problem: their relevancy algorithms personalise results to maximise advertising revenue, rather than to ensure the searcher is offered a complete and comprehensive overview of available sources.

Mass-market search engines want to provide quick and easy answers to a time-pressed, demanding public who are not professional researchers and who have little interest in issues like confirmation bias and objectivity. Mass-market search engines rely on advertising revenue, so it is in their best interests to flatter their users, not to challenge them by showing them surprising or unexpected content. Everyday non-professional searchers do not need a fully comprehensive view of all angles of a subject - they just want a "good enough" answer. So the "safe bet" is to serve up the kinds of sources users have selected before and not risk challenging them with anything new. In other words, searchers who are corralled into a tunnel and only see a small set of "safe bet" sources do not realise that their tunnel vision, however comforting, is not giving them the full picture.

Technology encouraged us into bubbles - can technology help us break out?
We are working on ways to use our understanding of relevancy to identify not just the obvious mainstream sources, but also the more interesting minority voices that researchers need to see. We are looking at ways to give researchers the ability to tune their research - by focusing on blogs rather than newspapers, for example - without the inconvenience of having to manage lots of different specialized search tools.

For regulatory analysts who need to be "ahead of the curve", waiting until a story is covered by a reputable mainstream source is too late. In our fast-paced world, once the mainstream media have discovered a story it is already "old news". Regulators need to be proactive and pre-emptive. Analysts need to be the first to know, but unlike the olden days, when the "scoop" was a rare commodity, nowadays the scoops are buried under mountains of junk. Offering only "popular" results is therefore not enough, so we will allow researchers to look for "obscure" as well as "popular" sources.
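To make that idea concrete, here is a minimal sketch of what an "obscurity dial" could look like. It is purely illustrative - the names, weights, and scores are hypothetical and do not describe the OpenReg ranking itself - but it shows how a single setting could let a researcher flip between popularity-weighted and obscurity-weighted views of the same results.

```python
# Hypothetical sketch only: names, weights, and scores are invented for
# illustration and do not describe the OpenReg ranking implementation.
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float   # topical relevance, 0..1, from the search index
    popularity: float  # source popularity, 0..1, e.g. normalised link counts

def rerank(results: list[Result], obscurity_bias: float) -> list[Result]:
    """obscurity_bias = 0.0 keeps the familiar popularity-weighted ordering;
    obscurity_bias = 1.0 actively promotes little-known sources instead."""
    def score(r: Result) -> float:
        # Relevance always counts; popularity is either a bonus (mainstream
        # mode) or a penalty (obscure mode), depending on the dial.
        popularity_term = (1 - obscurity_bias) * r.popularity + obscurity_bias * (1 - r.popularity)
        return 0.7 * r.relevance + 0.3 * popularity_term
    return sorted(results, key=score, reverse=True)

# The same two hits, seen through each end of the dial.
hits = [
    Result("https://bigpaper.example/crypto-rules", relevance=0.80, popularity=0.90),
    Result("https://niche-blog.example/crypto-rules", relevance=0.85, popularity=0.10),
]
print([r.url for r in rerank(hits, obscurity_bias=0.0)])  # mainstream outlet first
print([r.url for r in rerank(hits, obscurity_bias=1.0)])  # niche blog first
```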

In the past, researchers could work with expert librarians who knew how to find a range of sources and assess them for biases and quality, including obscure but valuable sources, but with so many millions of articles published online per day, no human librarian or researcher can manage the load by themselves. Just hoping that somehow you will happen upon that nugget of information or that hidden gem of a source does not give researchers confidence that they have done a thorough job. The OpenReg search engine will give researchers the confidence to know that even without a personal expert librarian to help them, their work is thorough, complete, and reliable.


Agile Tour Montreal

November 30th, 2017

 

It was a pleasure and an honour to speak at the Montréal Agile Tour on November 30th. It was a great chance to listen to world-class presenters and to meet lots of interesting people.

 

I talked about working with the Autorité des marchés financiers (AMF), the parallels we have noticed between the five stages of the regulatory process and Agile principles, and the AMF's commitment to innovation. The AMF follow Agile for their own software development processes, and are pioneering in their support for innovation, FinTech, and RegTech - for example through their innovation sandbox, where StartUps can trial new processes and procedures to make sure they are compatible with regulatory requirements. Our existence as a StartUp is evidence of how much the AMF supports Montréal's innovation ecosystem, and we are hugely grateful.

I also gave some examples of the difficulties we faced as a brand new StartUp in "hitting the ground Agile" - we had no budget and a fluid team with lots of volunteers, rather than a settled team with existing infrastructure to support us. One thing that surprised me was how much I had come to rely on software tools like JIRA to manage the Agile process, and how much software can shepherd you along a certain path, so it was a great exercise to get back to core Agile principles and create a system that we could manage with pen and paper and lots of sticky notes stuck to walls. Our sprints were somewhat fluid and we struggled to figure out velocity very effectively, but nevertheless Agile gave us a solid project management framework and we were able to produce our first user-testable prototype within three months.

 

There were lots of tracks and too many appealing sessions for me to attend them all, but I enjoyed learning from Patrick Gagné about Téo Taxi - a wonderful example of a socially innovative enterprise, supporting eco-friendly driving with good terms and conditions for drivers. Uber could certainly learn a thing or two from them!

Marilyn Powers and Sue Johnston held a lively session, "Do your product owners speak a foreign language?", in which they offered various techniques and methods for encouraging better communication. This is one of my pet topics - I would say it is not just Product Owners but pretty much every group in an organization that develops its own language, and we then have to spend a lot of time making sure we are translating properly. Particularly tricky terms I have encountered are tag, entity, document, archive, category, validation, model, client, and cluster. It can be interesting to see how many different meanings of those terms you can collect from within a single organization. One of the reasons I appreciate Agile, and user stories in particular, is the emphasis it places on ensuring everyone understands what everyone else is actually talking about.

 

Jeff Kosciejew and Ellen Grove described the process of implementing Lean and Agile at a bank. It was a lively account of how not to do Agile, followed by how to salvage the situation and turn it into a success.

"Forecast your project like a hurricane" was sound advice from Daniel Vacanti - a reminder that you need a plan, and that you need to update your plan frequently because it will change. Some people get disheartened by uncertainty, but once you start thinking probabilistically it becomes easier to make sensible predictions.

 

The highlight of the day was the closing conversation with Henry Mintzberg - a brilliant and thought-provoking speaker, never afraid to speak honestly and authentically about his beliefs and concerns for the future. He spoke of the need for Rebalancing Society: our world - especially the USA - is becoming unbalanced, with the market sector aggressively trying to dominate every aspect of life, and this is leading to inequality and social distress. The private sector is a great and wonderful thing, but it lacks human values, and if we allow the market to dominate, we damage society. For example, making huge profits out of pharmaceuticals and health care essentially forces people to choose between financial ruin and death in order to make others rich (and it interested me as an example of why we need good and effective regulatory authorities). He pointed out that we routinely refer to people as "human resources", but that in itself is demeaning - it takes away our humanity and compares us to lumps of coal or pieces of wood, to be used or discarded at will, simply in order to generate profit.

 

It is easy to get caught up in the routine tasks of running a business, but no business exists in isolation from society and the wider world, and certainly no business exists without its people.


What's the difference between searching and browsing?

November 9th, 2017

Even though many people think they are the same, searching and browsing are very different. Now that people rely on search engines, it is much harder to browse information, and therefore much harder to see how the information you find fits into a broader context. With browse, it was much easier to get an overview of a whole subject area and to see what the main categories within that subject are. This process made it much easier to "know what you don't know" and much harder to disappear into an "echo chamber" where you forget that there are other opinions out there.

 

Professional researchers need to know how any piece of information fits into a broader context, to consider contrarian as well as mainstream opinions, and watch out for outliers. The loss of browse in favour of search has certainly made this process far harder, and is why so many researchers have asked us for research tools that make suggestions and offer recommendations of what to search for.

 

Here are some of the key differences in what searching and browsing are best at:

 

  • Search is making a beeline to a known target, browse is wandering around and exploring.

  • Search is for when you know what you are looking for, browse is for when you don’t but want to find out.

  • Search is for when you know what you are looking for exists, browse is for when you don’t know what information is out there, but you want to discover.
     

  • Search expects you to look for something that is findable, browse shows you the sort of thing you can find.

  • Search is for when you already know what is available, browse is how you find out what is there, especially if you are a newcomer.

  • Search is difficult when you don’t know the right words to use, browse offers suggestions.

  • Search is a quickfire answer, browse is educative.

  • Search is about one-off actions, browse is about establishing familiar pathways that can be followed again or varied with predictable results.
     

  • Search relies on the seeker to do all the thinking, browse offers suggestions.

  • Search is a tricky way of finding content on related topics, browse is an easy way of finding related content.

  • Search is difficult when you are trying to distinguish between almost identical content, browse can highlight subtle distinctions.

  • Search rarely offers completeness, browse often offers completeness.
     

  • Search is pretty much a “black box” to most people, so it is hard to tell how well it has worked, browse systems are visible so it is easy to judge them.

  • Search uses complex processing that most people don’t want to see, browse uses links and connections that most people like to see.

  • Search is based on calculations and assumptions that are under the surface, browse systems offer frameworks that are more open.

 

One of the challenges of helping researchers navigate the firehose of online information is to find creative and engaging ways to offer browse-type overviews of what is available and help researchers discover information they would not have thought of searching for by themselves.
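As a toy illustration of that contrast - the data, topics, and function names below are entirely hypothetical, not a description of any real system - the same small collection can answer a precise question through search, or reveal its overall shape through a browse-style overview:

```python
# Hypothetical illustration of search versus browse over one collection.
from collections import Counter

documents = [
    {"title": "New ETF listing rules", "topics": ["financial products", "regulation"]},
    {"title": "Stablecoin risk overview", "topics": ["cryptocurrency", "regulation"]},
    {"title": "Mining pool economics", "topics": ["cryptocurrency"]},
    {"title": "Open banking APIs", "topics": ["fintech"]},
]

def search(query: str) -> list[str]:
    """Search: a direct answer to a specific question - precise,
    but it only returns what you already knew to ask for."""
    terms = query.lower().split()
    return [d["title"] for d in documents
            if any(t in d["title"].lower() for t in terms)]

def browse() -> Counter:
    """Browse: an overview of the whole collection by topic, showing the
    categories (and the gaps) you did not know to search for."""
    return Counter(topic for d in documents for topic in d["topics"])

print(search("ETF"))  # ['New ETF listing rules']
print(browse())       # Counter({'regulation': 2, 'cryptocurrency': 2, ...})
```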


The scale of information overload

August 21st, 2017

We have faster, easier access to information than ever before, but we are also generating information at a remarkable rate. The New York Times alone publishes about 230 pieces of its own content every day, plus several hundred stories from wire services. The Wall Street Journal publishes around 250 stories, and The Washington Post around 1,200 pieces of content. So, at a conservative estimate, these three sources alone are publishing some 2,000 items every day - well over 600,000 per year. And there are hundreds of news outlets across the world - some 25 major newspapers in Canada alone.
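As a rough back-of-envelope check on those figures (the wire-story count below is an assumed value standing in for "several hundred"; the rest are the estimates quoted above, drawn from the Nieman Lab piece linked in the sources):

```python
# Back-of-envelope only; the wire-story figure is an assumption.
daily_items = {
    "New York Times (own content)": 230,
    "New York Times (wire stories, assumed)": 300,
    "Wall Street Journal": 250,
    "Washington Post": 1200,
}

per_day = sum(daily_items.values())   # 1,980 items per day from just three outlets
per_year = per_day * 365              # 722,700 items per year
print(per_day, per_year)
```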

Less established sources are also generating content at an extraordinary rate - Buzzfeed publishes around 6,000 stories a month, but it is the blogosphere that is really mind-boggling. Over 3 million blog posts are published every single day - and the number seems to be rising.

Add to that newsletters, whitepapers, books, podcasts, journals, conference proceedings, social media - and it is not surprising that the feeling of being 'well informed' has become elusive, even for professional researchers. 

 

Limiting your research to specific subject areas helps a little, but the pace of change and the volume of associated data are breathtaking. For example, if you want to survey the business landscape in Canada, you will find that almost 80,000 new businesses were established in 2013, and the number appears to be growing each year. Even an extremely narrow area - cryptocurrency innovation, for example - moves at breakneck speed. There are now over 900 cryptocurrencies, all of which have come into existence over the past few years. Compare that to the mere 180 fiat currencies - a figure that has been fairly stable for decades (an exception being the start of the 1990s, following the collapse of the Soviet Union, when newly independent nations established their own sovereign currencies).

Keeping up with financial instruments is no easier. Even though there are only 60 stock exchanges in the world, the number of financial products is huge. The London Stock Exchange lists over 4,000 financial instruments (including 1,500 ETFs).

 

It is no wonder that researchers tell us they want to use the most cutting-edge techniques in machine learning and AI to help them select, sort, filter, and analyse the 'news'.

Sources: 
http://www.niemanlab.org/reading/how-many-stories-do-newspapers-publish-per-day/
http://www.worldometers.info/blogs/
http://www.statcan.gc.ca/
http://www.londonstockexchange.com/statistics/companies-and-issuers/companies-and-issuers.htm