Datawocky: How Google Measures Search Quality
-
The heart of the matter is this: how do you measure the quality of search results?
-
The first is that we have all been trained to trust Google and click on the first result no matter what, so ranking models that shift results only slightly may not produce significant swings in the measured usage data. The second, more interesting, factor is that users don't know what they're missing.
-
Here's the shocker: these metrics are not very sensitive to new ranking models! When Google tries new ranking models, these metrics sometimes move and sometimes don't, and never by much.
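The post doesn't spell out which metrics Google actually tracks, but a minimal sketch of the kind of click-based measurement it alludes to might look like the following. The session data and the metric choices (top-1 click rate, mean reciprocal rank of the clicked position) are my assumptions, not Google's:

    # Sketch of click-based search-quality metrics over hypothetical sessions.
    # Each session records the 1-based rank of the result the user clicked,
    # or None if the user clicked nothing.

    def click_metrics(clicked_ranks):
        """Return (top-1 click rate, mean reciprocal rank of the click)."""
        clicks = [r for r in clicked_ranks if r is not None]
        if not clicks:
            return 0.0, 0.0
        top1 = sum(1 for r in clicks if r == 1) / len(clicks)
        mrr = sum(1.0 / r for r in clicks) / len(clicks)
        return top1, mrr

    # If users click the first result "no matter what", two different ranking
    # models produce nearly identical numbers: exactly the insensitivity the
    # post describes.
    model_a = [1, 1, 2, 1, None, 1, 3, 1]
    model_b = [1, 1, 1, 2, 1, None, 1, 1]
    print(click_metrics(model_a))  # (0.714..., 0.833...)
    print(click_metrics(model_b))  # (0.857..., 0.928...)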
Datawocky: Are Machine-Learned Models Prone to Catastrophic Errors?
-
Taleb makes a convincing case that most real-world phenomena we care about actually inhabit Extremistan rather than Mediocristan. In these cases, you can make quite a fool of yourself by assuming that the future looks like the past.
-
The current generation of machine learning algorithms can work well in Mediocristan but not in Extremistan.
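A quick simulation makes the distinction vivid (my illustration, not the post's; the distribution choices are assumptions): in thin-tailed Mediocristan no single observation matters much, while in heavy-tailed Extremistan one draw can dominate the whole sample.

    import random

    # Mediocristan vs Extremistan, illustrated by how much of a sample's
    # total comes from its single largest value.
    random.seed(0)
    N = 100_000

    # Mediocristan: thin-tailed, e.g. human heights (roughly Gaussian).
    heights = [random.gauss(170, 10) for _ in range(N)]

    # Extremistan: heavy-tailed, e.g. wealth (Pareto with a low alpha).
    wealth = [random.paretovariate(1.1) for _ in range(N)]

    for name, xs in [("heights", heights), ("wealth", wealth)]:
        share = max(xs) / sum(xs)
        print(f"{name}: largest single value is {share:.4%} of the total")

    # For heights the largest value is a vanishing fraction of the total
    # (on the order of 0.001%), so the past predicts the future well. For
    # wealth a single draw can account for several percent of the sum, and
    # the figure varies wildly from run to run: assuming the future looks
    # like the past is exactly how you get fooled.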
-
It has long been known that Google's search algorithm actually works at two levels:

An offline phase that extracts "signals" from a massive web crawl and usage data. An example of such a signal is PageRank. These computations need to be done offline because they analyze massive amounts of data and are time-consuming. Because these signals are extracted offline, and not in response to user queries, they are necessarily query-independent; you can think of them as tags on the documents in the index. There are about 200 such signals.

An online phase that runs in response to a user query. A subset of documents is identified based on the presence of the user's keywords; then these documents are ranked by a very fast algorithm that combines the 200 signals in memory using a proprietary formula.
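The combining formula itself is proprietary, but a toy sketch of the two-phase shape described above, with invented signal names, weights, and documents, might look like this:

    # Toy sketch of the two-phase ranking described above. The signal names,
    # weights, and linear scoring formula are all made up for illustration;
    # Google's actual combination formula is proprietary.

    # Offline phase: query-independent signals precomputed per document and
    # stored as "tags" on the index (here, just a dict).
    SIGNALS = {
        "doc1": {"pagerank": 0.9, "spam_score": 0.1, "freshness": 0.4},
        "doc2": {"pagerank": 0.5, "spam_score": 0.0, "freshness": 0.9},
        "doc3": {"pagerank": 0.7, "spam_score": 0.8, "freshness": 0.2},
    }
    KEYWORDS = {  # inverted index: keyword -> documents containing it
        "python": {"doc1", "doc2"},
        "ranking": {"doc2", "doc3"},
    }
    WEIGHTS = {"pagerank": 2.0, "spam_score": -3.0, "freshness": 0.5}

    def search(query_terms):
        # Online phase: select documents containing all the query keywords...
        candidates = set.intersection(
            *(KEYWORDS.get(t, set()) for t in query_terms))
        # ...then rank them with a fast in-memory combination of the signals.
        def score(doc):
            return sum(WEIGHTS[s] * v for s, v in SIGNALS[doc].items())
        return sorted(candidates, key=score, reverse=True)

    print(search(["python", "ranking"]))  # ['doc2']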
Collaborative Filtering: Lifeblood of The Social Web - ReadWriteWeb
-
This, of course, relies on the fact that people's interests, preferences, and ideologies don't change too drastically over time.
-
A filtering system with preference-based recommendations, in essence, is the future of the social web.
-
The best implementations I have seen of collaborative filtering (CF) combined with a preference-based recommendation/discovery system are always on music streaming and discovery sites.
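For the mechanics behind this, here is a minimal user-based CF sketch: score items a listener hasn't rated by the ratings of the most similar listeners. The ratings, names, and cosine-similarity choice are illustrative assumptions, not any particular site's system.

    # Minimal user-based collaborative filtering over hypothetical ratings:
    # recommend items to a user based on what similar users liked.
    from math import sqrt

    RATINGS = {  # user -> {item: rating}
        "alice": {"song_a": 5, "song_b": 3, "song_c": 4},
        "bob":   {"song_a": 4, "song_b": 3, "song_d": 5},
        "carol": {"song_b": 1, "song_c": 5, "song_d": 2},
    }

    def cosine(u, v):
        """Cosine similarity between two sparse rating vectors (dicts)."""
        common = set(u) & set(v)
        if not common:
            return 0.0
        dot = sum(u[i] * v[i] for i in common)
        return dot / (sqrt(sum(x * x for x in u.values())) *
                      sqrt(sum(x * x for x in v.values())))

    def recommend(user, k=2):
        """Rank unseen items by similarity-weighted ratings of k neighbors."""
        others = [(cosine(RATINGS[user], RATINGS[o]), o)
                  for o in RATINGS if o != user]
        others.sort(reverse=True)
        scores = {}
        for sim, o in others[:k]:
            for item, rating in RATINGS[o].items():
                if item not in RATINGS[user]:
                    scores[item] = scores.get(item, 0.0) + sim * rating
        return sorted(scores, key=scores.get, reverse=True)

    print(recommend("alice"))  # ['song_d']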
The End Of The Scientific Method… Wha….? « Life as a Physicist
-
His basic thesis is that when you have so much data that you can map out every connection and every correlation, the data becomes the model. There is no need to derive or understand what is actually happening: you have so much data that you can already make all the predictions a model would let you make in the first place. In short, you no longer need to develop a theory or hypothesis; just map the data!
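One concrete reading of "the data becomes the model" (my illustration, not the post's) is a nearest-neighbor predictor: it fits no equation at all and answers every query by looking up the most similar stored observation.

    # "The data becomes the model": a 1-nearest-neighbor predictor makes
    # predictions straight from stored examples, with no fitted theory.
    # The data points here are made up for illustration.

    DATA = [  # (feature, outcome) pairs observed in the past
        (1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1), (5.0, 9.8),
    ]

    def predict(x):
        """Answer with the outcome of the closest previously seen point."""
        _, outcome = min(DATA, key=lambda p: abs(p[0] - x))
        return outcome

    print(predict(3.2))   # 6.2: interpolation within the data works fine
    print(predict(50.0))  # 9.8: anything truly new just echoes the past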
-
First, in order for this to work you need to have millions and millions and millions of data points. You need, basically, every single outcome possible, with all possible other factors. Huge amounts of data. That does not apply to all branches of science.
-
The second problem with this approach is that you will never discover anything new. The problem with new things is that there is no data on them!
SCI-E Journal Search - Scientific
SCI Journal Search - Scientific
Collaborative Filtering Research Papers
唐骏: The Wisdom and Sorrow of a 1-Billion Net Worth
ACM Guide to Computing Literature
Social Network Evolution - Sean Percival's Blog
-
Some of us run to each new service, play around for a bit and then quickly abandon it.