Advanced Concepts Team: Group items tagged Wikipedia

Erdős-Bacon number - Wikipedia, the free encyclopedia - 2 views

  • Ever heard of the Erdős-Bacon number? :-)
  • There is a tool (http://www.ams.org/mathscinet/collaborationDistance.html) which computes your Erdős number. But who cares about Kevin Bacon?
  • And actors probably ask who cares about Erdős :) The network of actors who co-star in movies is a famous one among networks people. Kevin Bacon became famous in that network because of fans of his who could trace from memory the paths of a large number of actors back to him :) See: http://en.wikipedia.org/wiki/Six_Degrees_of_Kevin_Bacon#History If you have your publications in http://academic.research.microsoft.com/, it gives you a nice tool to visualize your graph up to Erdős. Apparently I have a path of length 4, and several of length 5: http://academic.research.microsoft.com/VisualExplorer#36695545&1112639
  • And for the actors: http://oracleofbacon.org/ (both numbers are just shortest-path distances in a collaboration graph; see the sketch after this thread)
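A minimal Python sketch of what tools like the MathSciNet calculator and the Oracle of Bacon compute: a breadth-first search for the shortest path in a collaboration graph. The names and edges below are made up for illustration, not real co-author data.

```python
from collections import deque

def collaboration_distance(graph, source, target):
    """Shortest path between two people in a collaboration graph,
    given as a dict mapping each name to its set of collaborators."""
    seen = {source: None}            # name -> predecessor on a shortest path
    queue = deque([source])
    while queue:
        person = queue.popleft()
        if person == target:         # reconstruct the path backwards
            path = [person]
            while seen[path[-1]] is not None:
                path.append(seen[path[-1]])
            return len(path) - 1, path[::-1]
        for coauthor in graph.get(person, ()):
            if coauthor not in seen:
                seen[coauthor] = person
                queue.append(coauthor)
    return None, []                  # not connected

# Toy example (made-up names and edges):
graph = {
    "You": {"A"},
    "A": {"You", "B"},
    "B": {"A", "Erdős"},
    "Erdős": {"B"},
}
print(collaboration_distance(graph, "You", "Erdős"))
# (3, ['You', 'A', 'B', 'Erdős'])
```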

Lockheed Martin buys first D-Wave quantum computing system - 1 views

  • D-Wave develops computing systems that leverage the physics of quantum mechanics in order to address problems that are hard for traditional methods to solve in a cost-effective amount of time. Examples of such problems include software verification and validation, financial risk analysis, affinity mapping and sentiment analysis, object recognition in images, medical imaging classification, compressed sensing and bioinformatics.
  • According to the company's Wikipedia page, the computer costs $10 million. Can we then declare that quantum computing has officially arrived?! Quotes from elsewhere on the site: "first commercial quantum computing system on the market"; "our current superconducting 128-qubit processor chip is housed inside a cryogenics system within a 10 square meter shielded room". Link to the company's scientific publications. Interestingly, this company seems to have been running a BOINC project, AQUA@home, to "predict the performance of superconducting adiabatic quantum computers on a variety of hard problems arising in fields ranging from materials science to machine learning. AQUA@home uses Internet-connected computers to help design and analyze quantum computing algorithms, using Quantum Monte Carlo techniques". List of papers coming out of it. (A sketch of the optimization format such annealers work on follows this thread.)
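For context on what a machine like this actually optimizes: D-Wave's annealers work on Ising/QUBO-type objectives, so each application listed above has to be cast as a quadratic unconstrained binary optimization problem first. A toy Python sketch of that problem format, with a made-up 3-variable matrix, solved by the brute-force enumeration that annealing hardware is meant to sidestep:

```python
import itertools

def qubo_energy(Q, x):
    """Energy of binary assignment x under QUBO matrix Q: sum_ij Q[i][j]*x[i]*x[j]."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    """Exhaustively find the minimum-energy assignment. Only viable for tiny n;
    avoiding this 2^n search is the whole point of annealing hardware."""
    n = len(Q)
    return min(itertools.product((0, 1), repeat=n),
               key=lambda x: qubo_energy(Q, x))

# Made-up instance: diagonal entries = linear terms, off-diagonal = couplings.
Q = [[-1,  2,  0],
     [ 0, -1,  2],
     [ 0,  0, -1]]
best = brute_force_qubo(Q)
print(best, qubo_energy(Q, best))   # (1, 0, 1) with energy -2
```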

History and philosophy of science - Wikipedia, the free encyclopedia - 0 views

  • I know it is Wikipedia, but it's just a short intro about this so-called interdisciplinary and rather new field - what we like - NS

Map of all geo-tagged articles on Wikipedia - 4 views

  • I know you like these... [Edit] And by the way, this website also contains more practical stuff, like this
  • They must have rigged the data in favour of Poland...
  • Of course, "they" being Polish Wikipedia contributors who geo-tag like mad...
  • Have you had a look at Japan? It looks like they just geo-tagged all their train stations.

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  • As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also... now that the Commission has funded the project, Blue Brain is a rather big hit. Btw, Nicolelis is a rather credible neuroscientist.
  • Nice article; Luzi would agree as well, I assume. One aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness... anybody?
  • This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence, AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to), AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains are not my thing)... I'm not worried.
  • In any case, all AI research is going in the wrong direction: the mainstream is not working on how to go beyond Turing machines, but rather on how to program them well enough... and that's not bringing us anywhere near the singularity.
  • It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed was the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (And research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  • [warning: troll] If you are happy with developing algorithms that serve the financial market... good for you :) After all, they have been proved to be useful for humankind beyond any reasonable doubt.
  • Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops; 2) what worries me most, I didn't get the retail store pun at the end of the article...
  • True, but after Google hired Kurzweil he is de facto being taken seriously... so I guess Nicolelis reacted to this.
  • Crazy scientist in residence... interesting marketing move, I suppose.
  • Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain; it will still be rather useless, since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  • I guess chaos is what you are referring to. Stuff like the Lorenz attractor (see the sketch after this thread). In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine, as long as some brain gets started :) that is the goal :)
  • Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which a biological brain is not) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10% of them, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment, so these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but it will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  • Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much in order for the brain to function and exhibit macro-characteristics useful to make science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one had an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  • I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a small, tiny functional unit by adding neurons and complexity slowly, and in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals how far they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  • Johannes: "I never quite see from these 'artificial brain' proposals how far they are trying to simulate the whole system and not just the brain." In Artificial Life, the whole environment+bodies&brains is simulated. You also have the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains than human ones. (My take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platform onto which any scientist can plug in their model of a detail of a detail of... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models in more realistic settings, in which the models' outputs get to affect many other systems, and through them feed back into their future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  • @Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system actually still works? I'm pretty sure it does not!
  • @Luzi: That was exactly my point :-)
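A minimal numerical illustration of the sensitivity-to-initial-conditions point debated above, on the Lorenz system mentioned in the thread (standard textbook parameters, simple Euler integration; a toy sketch, not a claim about brains): two trajectories started one part in a billion apart end up completely decorrelated.

```python
# Two Lorenz-system trajectories with a 1e-9 difference in the initial
# condition: their separation grows exponentially, which is the
# initial-value problem the thread is talking about.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)          # perturbed by one part in a billion
for step in range(1, 40001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 10000 == 0:
        dist = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.001:5.1f}  separation = {dist:.3e}")
# The separation climbs from ~1e-9 to the diameter of the attractor,
# no matter how small the initial perturbation is made.
```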

Netherlands in Proverbs - 3 views

  • Continuing the museum theme... today's Wikipedia Picture of the Day. This might be *the* ultimate test of the knowledge of Dutch... can you name any of them? On a more ACT-like note: I wonder what the contemporary version would look like? P.S. Yes, the proverbs are listed on Wikipedia and yes, lots of them involve herring.

A Different Form of Color Vision in Mantis Shrimp - 4 views

  • Mantis shrimp seem to have 12 types of photo-receptive sensors, but this does not really improve their ability to discriminate between colors. The speculation is that they serve as a form of pre-processing for visual information: the brain does not need to decode full color information from just a few channels, which would allow for a smaller brain. I guess technologically the two extremes of light detection would be RGB cameras, which are like our eyes and offer good spatial resolution, and spectrometers, which have a large number of color channels but at the cost of spatial resolution. It seems the mantis shrimp uses something that is somewhere between RGB cameras and spectrometers. Could there be a use for this in space?
  • > RGB cameras which are like our eyes ...apart from the fact that the spectral response of the eyes is completely different from "RGB" cameras (http://en.wikipedia.org/wiki/File:Cones_SMJ2_E.svg) ... and that the eyes have 4 types of light-sensitive cells, not three (http://en.wikipedia.org/wiki/File:Cone-response.svg) ... and that, unlike cameras, the human eye is precise only in a very narrow central region (http://en.wikipedia.org/wiki/Fovea) ... hmm, apart from relying on tri-stimulus colour perception, it seems human eyes are in fact completely different from "RGB cameras" :-) OK, sorry for picking on this - that's just the colour science geek in me :-) Now seriously, on the one hand the article abstract sounds very interesting, but on the other, the statement "Why use 12 color channels when three or four are sufficient for fine color discrimination?" reveals so much ignorance of the very basics of colour science that I'm completely puzzled - in the end, it's a Science article, so it should be reasonably scientifically sound, right? Pity I can't access the full text... The interesting thing is that more channels mean more information and therefore should require *more* power to process - which is exactly the opposite of their theory (as far as I can tell from the abstract...). So the key is to understand *what* information about light these mantises are collecting and why - it's definitely not "colour" in the sense of human perceptual experience (a toy sketch of the channel-count trade-off follows this thread). But in any case - yes, spectrometry has its uses in space :-)
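To make the channel-count trade-off discussed above concrete, here is a toy Python sketch: the same made-up spectrum projected onto 3 broad Gaussian channels (tristimulus-like) versus 12 narrow ones (closer to a coarse spectrometer). The response curves are invented for illustration, not real cone or stomatopod data:

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def channel_responses(spectrum, n_channels, lo=400.0, hi=700.0):
    """Project a spectrum (a function of wavelength in nm) onto n equally
    spaced Gaussian channels spanning the visible range. More channels
    with narrower bands = closer to a spectrometer reading."""
    width = (hi - lo) / n_channels            # narrower bands as n grows
    centers = [lo + width * (i + 0.5) for i in range(n_channels)]
    wavelengths = [lo + k for k in range(int(hi - lo) + 1)]  # 1 nm steps
    return [sum(spectrum(w) * gaussian(w, c, width / 2) for w in wavelengths)
            for c in centers]

def spectrum(w):
    """Made-up input spectrum: a greenish peak at 550 nm."""
    return gaussian(w, 550.0, 30.0)

print([round(r, 1) for r in channel_responses(spectrum, 3)])   # RGB-like
print([round(r, 1) for r in channel_responses(spectrum, 12)])  # shrimp-like
```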

Massively collaborative mathematics : Article : Nature - 28 views

  • Peer-to-peer theorem-proving.
  • Or: mathematicians catch up with open-source software developers :)
  • "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only a few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  • I do not agree with you, F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making team work non-existent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  • I think we will just have to agree to disagree then :) Open source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing just fine to scale as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open source projects are managed collaboratively by thousands of people, and many of them are in the multi-million-lines-of-code range. My personal opinion on why these open models face so many difficulties in the scientific world is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of change. There is also the fact that the barrier to entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an operating system or writing a new balancing-tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  • Here we go... nice controversy! But maybe too many things are mixed up together: open-science journals vs traditional journals, conservatism of the science community wrt programmers (to me one of the reasons for this might be the average age of the two groups, which is probably more than 10 years apart...), and then emailing wrt other collaboration tools... I will have to look at the paper now more carefully... (I am surprised to see no comment from José or Marek here :-)
  • My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize"; what does it mean, indeed? In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination means. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively thanks to the wiki and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put up a list of 12 rules for this project. None are related to emails, wikis, forums... because that would be lame, and then your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  • The question of ACT team work is one that keeps coming back, and so far it has always boiled down to the question of how much there needs to be one team project to which everybody in the team contributes in his/her way, versus how much we should let smaller, flexible teams within the team form and progress, following a bottom-up initiative rather than imposing one from the top down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. But hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it... though in my view, it should be on a convince rather than oblige basis...
  • It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it for their own personal attitude or be it for pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team work, where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris' words).
  • I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and the others waiting. Sometimes, even, some people don't do what they are supposed to do, so nothing happens... this cannot be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it... in the end it is less work for me :) team = 3 members or more. I am personally not looking for a 15-member team effort, and that is not what I meant. Anyway, this is not exactly the subject of the paper.
  • My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings (hate, anger, envy, sympathy, love, etc.) about the others. Unfortunately(?), this can lead to situations where, in theory, a group of brains could work together, but the same group of people cannot. As far as I am concerned, this happened many times during my ACT period. And it is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I have efficient collaborations with those people who are "close" to me, not only in scientific interest but also in some private sense. And I have people around me who have interesting topics and might need my help and knowledge, but somehow, it just does not work. Simply a lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, here is why it worked: only the brains, and not the people, worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  • Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it was the original article's mention of the "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people); it is something that has been done countless times before. Do you think that now that it is mathematicians who are doing it, it becomes somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...); there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all, I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given this kind of variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant in a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that
  • Hey, just caught up on the discussion. For me what is very new is mainly the framework in which this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for the BOINC-type projects, which are very different, because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as far as I read in the wiki (if an article is written, who writes it, whose names go on the paper?). They chose to refer to the project, and not to the individual researchers, as a temporary solution... It is not so surprising to me that this type of work was first done in the domain of mathematics. Perhaps I have an idealized view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so this model of research does not fit, in my opinion, with the way research is governed. Anyway, we had a discussion on the Ariadnet on how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  • I don't really see much of a problem with giving credit. It is not the first time a group of researchers has collectively taken credit for a result under a group umbrella; e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible, there's no way to fake contributions or to give undue credit, and one could cite a group paper without problems in his/her CV, research grant application, etc.
  • Well, my point was more that it could be a problem with how the actual system works. Let's say you want a grant or a position; then the jury will count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of these journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  • It seems that I should visit you guys at ESTEC... :-)
  • Urgently!! Btw: we will have the ACT Christmas dinner on the 9th in the evening... are you coming?

Top 100 Most Visited Articles on Wikipedia in 2009 - 2 views

  • The most impressive thing about this blog, TechXav, is that it is run by 14-15 year olds... And TechXav is not the only thing they are doing, wow.

Bitcoin P2P Currency: The Most Dangerous Project We've Ever Seen - 10 views

  • After months of research and discovery, we’ve learned the following:
    1. Bitcoin is a technologically sound project.
    2. Bitcoin is unstoppable without end-user prosecution.
    3. Bitcoin is the most dangerous open-source project ever created.
    4. Bitcoin may be the most dangerous technological project since the internet itself.
    5. Bitcoin is a political statement by technotarians (technological libertarians).*
    6. Bitcoins will change the world unless governments ban them with harsh penalties.
  • The benefits of a currency like this:
    a) Your coins can’t be frozen (like a Paypal account can be)
    b) Your coins can’t be tracked
    c) Your coins can’t be taxed
    d) Transaction costs are extremely low (sorry credit card companies)
  • An individual with the name -- or perhaps handle -- of Satoshi Nakamoto first wrote about bitcoins in a paper called Bitcoin: A Peer-to-Peer Electronic Cash System.
  • * We made this term up to describe the “good people” of the internet who believe in the fundamental rights of individuals to be free, have free speech, fight hypocrisy and stand behind logic, technology and science over religion, political structure and tradition. These are the people who build and support things like Wikileaks, Anonymous, Linux and Wikipedia. They think that people can, and should, govern themselves. They are against external forms of control such as DRM, laws that are bought and sold by lobbyists, and religions like Scientology. They include splinter groups that enforce these ideals in the form of hacktivism, such as the takedown of the Sony Playstation Network after Sony tried to prosecute a hacker for unlocking its console.
  • Sounds good!
  • Wow, it's frightening! It's the dream of every anarchist, every drug, arms or human dealer! The world made into a global fiscal paradise... The idea is clever; however, it will not replace real money, because 1 - no one will build a fortune on bitcoin if a technological breakthrough can ruin them, and 2 - governments have never allowed parallel money to flourish on their territory, so it will be almost impossible to change bitcoin against euros or dollars.
  • Interesting stuff. Anyone read Cryptonomicon by Neal Stephenson? Similar theme.
  • :) yes. One of the comments on reddit was precisely drawing the parallels with Neal Stephenson's Snow Crash / Diamond Age / Cryptonomicon. Interesting stuff indeed. It has a lot of potential for misuse, but also opens up new possibilities. We've discussed recently how emerging technologies will drive social change. Whether it's the likes of the NSA / CIA who will benefit the most from the Twitters, Facebooks and so on, by gaining greater power for control, or whether individuals are being empowered to at least an identical degree. We saw last year VISA / PayPal censoring WikiLeaks... Well, here's a way for any individual to support such an organization, in a fully anonymous and uncontrollable way...
  • One of my colleagues has made a nice, short write-up about Bitcoin: http://www.pds.ewi.tudelft.nl/~victor/bitcoin.html
  • Very nice analysis indeed - thanks Tamas for sharing it!
  • Mmm, I'm not an expert, but it seems to me that, even if these criticisms are true, there is one fundamental difference between the money you exchange on the internet via your bank and bitcoins. The first is virtual money, and the second aims at being real, physical money, even if digital, in the same way as banknotes, coins, or gold.
  • An algorithmic wannabe central bank issuing untraceable, tax-free money between internet users? Not more likely than the end of the world supposed to take place tomorrow, in my opinion. Algorithms don't usually assault women though! :P
  • Well, most money is anyway just virtual and only based on expectations and trust... (see e.g. http://en.wikipedia.org/wiki/Money_supply) and thus if people trust that this "money" has some value, in the sense that they can get something of value to them in exchange, then not much more is needed, it seems to me...
  • @Leopold: OK, let's use the right words then. Bitcoin aims at being a currency ("physical objects generally accepted as a medium of exchange", from Wikipedia), which is different from a "demand deposit". In the article proposed by Tamas, he compares what cannot be compared (currencies, demand deposits and their means of exchange). The interesting question is whether one can create a digital currency which is too difficult to counterfeit (Bitcoin's answer is proof-of-work; see the sketch after this thread). As far as I know, there is no existing digital currency except Bitcoin (and maybe the currencies from games such as Second Life and others, but those are of limited use in the real world).
  • Well, of course money is trust, and even more so loans and credit, and even more stock and bond markets. It all represents trust and expectations. However, since the first banks 500 years ago and the first loans etc., and given that bonds and currencies bring down whole countries (Greece lately) and are mainly controlled by large financial centres and (central) banks, banks have always been on the winning side no matter what, and that isn't going to change easily. So if you are talking about these new currencies, it would be a new era, not just a new currency. So, should Greece convert its debt to bitcoins? ;P
  • Well, from 1936 to 1993 the central bank of France was owned by the state and was supposed to serve the general interest...
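On the counterfeiting question: what makes Bitcoin hard to forge is proof-of-work, i.e. extending the public ledger requires finding a nonce whose hash together with the block data meets a difficulty target. A minimal Python sketch of the idea (real Bitcoin applies double SHA-256 to a binary block header; this toy version just asks for leading zero hex digits over a made-up string):

```python
import hashlib

def mine(block_data, difficulty):
    """Find a nonce such that sha256(block_data + nonce) starts with
    `difficulty` zero hex digits. Forging a block means redoing this work;
    checking the proof takes a single hash."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Made-up block contents; finding the nonce takes ~16**4 = 65536 tries
# on average at difficulty 4.
nonce, digest = mine("Alice pays Bob 1 coin; prev block hash ...", 4)
print(nonce, digest)
```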

Is color vision defined by language? "The Himba tribe" - BBC Horizon - 2 views

  • Yeah, that's interesting stuff... We have one prof in the lab who used to do some research related exactly to this (http://www.tech.plym.ac.uk/socce/staff/tonybelpaeme/research.html). A similar question (i.e. if/how language is involved in the formation of a concept) is also valid for numbers; see for instance this recent story: http://www.newscientist.com/article/dn20095-without-language-numbers-make-no-sense.html

List anonymous wikipedia edits from interesting organizations - 1 views


Chatbot: Elbot the Robot - 2 views

shared by Alexandre Kling on 02 Nov 12
  • Hey guys, does one of you have any idea how it's coded? I mean, is it just a basic database of already-written answers, or something more sophisticated? Anyway, have fun! Alex
  • I assume it's one more descendant of ELIZA, so: a database plus pattern matching (a toy sketch of that approach follows this thread). See the Loebner Prize for the state of the art in similar chatbots.
  • Hi, thanks for your answer. I had a look at the different versions; it's pretty interesting to see how they have evolved over the years.
  • I was not at all impressed - I stopped after a few questions, since I was getting ridiculous answers or attempts to change the topic.
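Since the thread settles on "database + pattern matching", here is a minimal ELIZA-style sketch of that architecture in Python. The rules are made up for illustration; Elbot's actual rule base is unknown to us, just presumably far larger:

```python
import random
import re

# Rule database: regex pattern -> canned responses, with back-references
# into the user's own words (the classic ELIZA trick).
RULES = [
    (r"i am (.*)",        ["Why do you say you are {0}?",
                           "How long have you been {0}?"]),
    (r"i think (.*)",     ["Do you really think {0}?"]),
    (r".*\b(mother|father)\b.*", ["Tell me more about your {0}."]),
    (r"(.*)",             ["Please go on.",
                           "Interesting. Why?"]),   # fallback: change topic
]

def reply(user_input):
    text = user_input.lower().strip(" .!?")
    for pattern, responses in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return random.choice(responses).format(*match.groups())

print(reply("I am disappointed by Elbot"))
# e.g. "Why do you say you are disappointed by elbot?"
```

The "ridiculous answers or attempts to change the topic" reported above are exactly what the fallback rule produces when no pattern matches.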

List of selfie-related injuries and deaths - Wikipedia, the free encyclopedia - 4 views

  • Be careful... new technologies are killing us!!!
  • New technologies, old stupidity. I remember the Polish couple one from the news... Horrific, kids left traumatised for life...

Some movement toward academic spring here in UK - 2 views

  • "Giving people the right to roam freely over publicly funded research will usher in a new era of academic discovery and collaboration, and will put the UK at the very forefront of open research".

Web 2.0 Suicide Machine - Wikipedia, the free encyclopedia - 3 views

  • you will love this one...
  • Hilarious! Perhaps I should join Facebook and Twitter just to commit suicide afterwards!!

Norway loves electric cars - 0 views

  • The main reasons: (1) awareness: people know that a variety of consumer cars exists; (2) negative incentives that push people away from gasoline-powered cars, e.g. fuel taxes; (3) positive incentives: exemption from road tax and purchase tax, and free parking (all temporary); and (4) extensive recharging infrastructure. Other countries have some or all of these elements, but Norway has pushed them the hardest, and the result is that the Nissan Leaf was the best-selling car in September and October, beating all other cars.
  • If there's anyone who could afford such things, it is Norway... According to http://xkcd.com/980/, Oljefondet (http://en.wikipedia.org/wiki/Government_Pension_Fund_of_Norway) is currently worth nearly as much as the US has spent on wars. I mean, all of them together... One of the biggest problems in Norway is what to do with this money without damaging the economy in the long run :-)

Abandoned Remains of the Russian Space Shuttle Project Buran - 0 views


The Fallout of a Helium-3 Crisis : Discovery News - 3 views

  • So short in fact, that last year when the looming crisis, which reporters had been covering for years, became official, the price of helium-3 went from $150 per liter to $5,000 per liter.
  • The science, medical and security uses for helium-3 are so diverse that the crisis banded together a hodge-podge of universities, hospitals and government departments to try and find workable alternatives and engineer ways to recycle the gas they do have.
  • So, which shall it be? Are we going to increase the production of hydrogen bombs, or can we finally go back to the Moon (http://en.wikipedia.org/wiki/Helium-3#Extraterrestrial_supplies)?
  • Neither. Either you recycle, or you take it from natural sources on Earth. Although most people don't know it, there is plenty of natural He-3 on Earth. It's just nonsense to use it for energy production (in fusion reactors), since the energy balance of getting the He-3 from these sources on Earth is simply negative. Or you try to substitute He-3.

DIRECT - Wikipedia, the free encyclopedia - 0 views

  • DIRECT is a proposed alternative Shuttle-Derived Launch Vehicle architecture supporting NASA's Vision for Space Exploration, which would replace the space agency's planned Ares I and Ares V rockets with a family of launch vehicles named "Jupiter."
  • DIRECT is advocated by a group of space enthusiasts that asserts it represents a broader team of dozens of NASA and space industry engineers who actively work on the proposal on an anonymous, volunteer basis in their spare time.
  • Just read about this; it looks like an interesting example of bottom-up innovation and self-organization.