The beauty of such Big Data applications is that they can process Web-based text, digital images, and online video. They can also glean intelligence from the exploding social media sphere, whether it consists of blogs, chat forums, Twitter trends, or Facebook commentary. Traditional market research generally involves unnatural acts, such as surveys, mall-intercept interviews, and focus groups. Big Data examines what people say about what they have done or will do. That's in addition to tracking what people are actually doing about everything from crime to weather to shopping to brands. It is only Big Data's capacity for dealing with vast quantities of real-time unstructured data that makes this possible.
This Scientist Uses The New York Times Archive To Eerily, Accurately Predict The Future...
Use Big Data to Predict Your Customers' Behaviors - Jeffrey F. Rayport - Harvard Busine...
-
Much of the data organizations are crunching is human-generated. But machine sensors — what GE people like CMO Beth Comstock called "machine whispering" when I talked with her this past summer — are creating a second tsunami of data. Digital sensors on industrial hardware like aircraft engines, electric turbines, automobiles, consumer packaged goods, and shipping crates can communicate "location, movement, vibration, temperature, humidity, and even chemical changes in the air."
-
the number of Google queries about housing and real estate from one quarter to the next turns out to predict more accurately what's going to happen in the housing market than any team of expert real estate forecasters. Similarly, Google search queries on flu symptoms and treatments reveal weeks in advance what flu-related volumes hospital emergency departments can expect.
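The forecasting described above can be sketched as a simple "nowcasting" regression: fit a line from search-query volume to later hospital volume, then extrapolate. This is a minimal illustration, not Google's actual method, and every number below is invented.

```python
# Sketch: "nowcasting" flu-related hospital volume from search-query counts.
# All figures are synthetic illustrations, not real Google or hospital data.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Weekly flu-symptom query counts, and ER flu visits seen weeks later.
queries = [120, 150, 210, 320, 480, 610]   # hypothetical search volumes
er_visits = [40, 48, 65, 95, 140, 175]     # hypothetical ER volumes

a, b = fit_line(queries, er_visits)
predicted = a + b * 700  # forecast ER volume if queries spike to 700
```

Because the query data is available immediately while hospital tallies lag, even a crude fit like this gives emergency departments advance notice of a surge.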
All Hail the Generalist - Vikram Mansharamani - Harvard Business Review - 0 views
-
the specialist era is waning. The future may belong to the generalist.
-
there appears to be reasonable and robust data suggesting that generalists are better at navigating uncertainty.
-
Professor Philip Tetlock conducted a 20+ year study of 284 professional forecasters. He asked them to predict the probability of various occurrences both within and outside of their areas of expertise. Analysis of the 80,000+ forecasts found that experts were less accurate predictors than non-experts, even in their own areas of expertise.
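Studies of this kind grade probabilistic forecasts with the Brier score: the mean squared gap between the stated probability and the 0/1 outcome, where lower is better. The forecasts and outcomes below are invented purely to show how an overconfident specialist can score worse than a hedging generalist.

```python
# Sketch: the Brier score used to grade probabilistic forecasts.
# Lower is better. All forecast/outcome values here are invented.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident "expert" vs. a hedging "generalist" on the same five events.
outcomes = [1, 0, 0, 1, 0]
expert = [0.9, 0.8, 0.7, 0.2, 0.1]       # overconfident, often wrong
generalist = [0.6, 0.4, 0.4, 0.6, 0.3]   # hedged probabilities

print(brier_score(expert, outcomes), brier_score(generalist, outcomes))
```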
One Tweet Can Kill A Market, Many Tweets Have Little Effect - ReadWrite
-
what about Cisco? Stock is up 19% over the last six months too, while Twitter reflects 82% bearish sentiment. Apple? Stock is down 33% in the last six months, while Twitter sentiment remains neutral.
-
some stocks like Microsoft accurately reflect their Twitter sentiment: Microsoft is up 23% in the last six months, with 71% of tweets cheering the company on.
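The article's comparison can be made explicit: does the sign of the six-month return match the direction of majority tweet sentiment? The returns and sentiment percentages below come from the article; the agreement test itself is an illustrative assumption.

```python
# Sketch: checking whether aggregate Twitter sentiment agreed with
# six-month stock returns, per the article's examples. The tickers and
# figures are from the article; the comparison rule is an assumption.

def sentiment_agrees(return_pct, bullish_pct):
    """True when the sign of the return matches majority sentiment."""
    bullish = bullish_pct >= 50
    return (return_pct > 0) == bullish

cases = {
    # ticker: (six-month return %, % of tweets bullish)
    "CSCO": (19, 18),   # up 19%, 82% bearish -> 18% bullish
    "AAPL": (-33, 50),  # down 33%, sentiment roughly neutral
    "MSFT": (23, 71),   # up 23%, 71% bullish
}

agreement = {t: sentiment_agrees(r, b) for t, (r, b) in cases.items()}
```

Only Microsoft lines up, which is the article's point: crowd sentiment and price often diverge.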
Can Artificial Intelligence Like IBM's Watson Do Investigative Journalism? ...
-
Two years ago, the two greatest Jeopardy champions of all time got obliterated by a computer called Watson. It was a great victory for artificial intelligence--the system racked up more than three times the earnings of its next meat-brained competitor. For IBM’s Watson, the successor to Deep Blue, which famously defeated chess champion Garry Kasparov, becoming a Jeopardy champion was a modest proof of concept. The big challenge for Watson, and the goal for IBM, is to adapt the core question-answering technology to more significant domains, like health care.

WatsonPaths, IBM’s medical-domain offshoot announced last month, is able to derive medical diagnoses from a description of symptoms. From this chain of evidence, it’s able to present an interactive visualization to doctors, who can interrogate the data, further question the evidence, and better understand the situation. It’s an essential feedback loop used by diagnosticians to help decide which information is extraneous and which is essential, thus making it possible to home in on a most-likely diagnosis. WatsonPaths scours millions of unstructured texts, like medical textbooks, dictionaries, and clinical guidelines, to develop a set of ranked hypotheses. The doctors’ feedback is added back into the brute-force information retrieval capabilities to help further train the system.
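The core loop described above, i.e. matching observed symptoms against mined knowledge to produce ranked hypotheses, can be sketched at toy scale. The three-entry symptom "knowledge base" below is invented for illustration; the real system mines millions of unstructured texts and uses far richer evidence scoring.

```python
# Sketch: evidence-scored hypothesis ranking of the kind WatsonPaths is
# described as doing. The tiny "knowledge base" here is invented.

knowledge = {
    "influenza": {"fever", "cough", "aches", "fatigue"},
    "strep throat": {"fever", "sore throat", "swollen glands"},
    "common cold": {"cough", "sneezing", "sore throat"},
}

def rank_hypotheses(symptoms, kb):
    """Rank diagnoses by the fraction of their known symptoms observed."""
    scores = {dx: len(symptoms & s) / len(s) for dx, s in kb.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranked = rank_hypotheses({"fever", "cough", "aches"}, knowledge)
```

The doctor-feedback loop would then adjust these scores, which is the part a static sketch cannot show.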
-
For Watson, ingesting all 2.5 million unstructured documents is the easy part. For this, it would extract references to real-world entities, like corporations and people, and start looking for relationships between them, essentially building up context around each entity. This could be connected out to open-entity databases like Freebase, to provide even more context. A journalist might orient the system’s “attention” by indicating which politicians or tax-dodging tycoons might be of most interest. Other texts, like relevant legal codes in the target jurisdiction or news reports mentioning the entities of interest, could also be ingested and parsed.

Watson would then draw on its domain-adapted logic to generate evidence, like “IF corporation A is associated with offshore tax-free account B, AND the owner of corporation A is married to an executive of corporation C, THEN add a tiny bit of inference of tax evasion by corporation C.” There would be many of these types of rules, perhaps hundreds, and probably written by the journalists themselves to help the system identify meaningful and newsworthy relationships. Other rules might be garnered from common sense reasoning databases, like MIT’s ConceptNet.

At the end of the day (or probably just a few seconds later), Watson would spit out 100 leads for reporters to follow. The first step would be to peer behind those leads to see the relevant evidence, rate its accuracy, and further train the algorithm. Sure, those follow-ups might still take months, but it wouldn’t be hard to beat the 15 months the ICIJ took in its investigation.
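The IF/THEN evidence rule quoted above can be sketched directly: hand-written rules walk a graph of extracted entity relationships and add small weights of suspicion. Every entity name, relation, and weight below is hypothetical.

```python
# Sketch of the rule-based evidence accumulation described above:
# a hand-written rule adds a small weight of suspicion to entities in a
# relationship graph. All entities, relations, and weights are invented.

relations = [
    ("CorpA", "holds_account", "OffshoreB"),
    ("OwnerX", "owns", "CorpA"),
    ("OwnerX", "married_to", "ExecY"),
    ("ExecY", "executive_of", "CorpC"),
]

suspicion = {}

# Rule: IF corp A holds an offshore account, AND A's owner is married to an
# executive of corp C, THEN add a tiny bit of inference of evasion by C.
for corp, r1, _account in relations:
    if r1 != "holds_account":
        continue
    for owner, r2, owned in relations:
        if r2 != "owns" or owned != corp:
            continue
        for person, r3, spouse in relations:
            if r3 != "married_to" or person != owner:
                continue
            for exec_, r4, target in relations:
                if r4 == "executive_of" and exec_ == spouse:
                    suspicion[target] = suspicion.get(target, 0) + 0.1

leads = sorted(suspicion, key=suspicion.get, reverse=True)
```

A production system would compile hundreds of such rules into an efficient query engine rather than nested loops, but the accumulate-tiny-inferences structure is the same.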