Advanced Concepts Team / Group items tagged: people


tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  • ...14 more comments...
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you, F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum ... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making teamwork nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing just fine to scale as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open source projects are managed collaboratively by thousands of people, and many of them are in the multi-million lines of code range. My personal opinion of why in the scientific world these open models are having so many difficulties is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of changes. There is also the fact that the barrier of entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an Operating System or writing a new balancing tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things mixed up together - open science journals vs traditional journals, conservatism of science community wrt programmers (to me one of the reasons for this might be the average age of both groups, which is probably more than 10 years apart ...) and then using emailing wrt other collaboration tools .... .... will have to look at the paper now more carefully ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize"; what does it mean, indeed? In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination tool. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively because of wikipedia and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikipedia, forums ... because that would be lame and your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question of ACT team work is one that keeps coming back, and so far it has always boiled down to the question of how much there needs to and should be one team project to which everybody in the team contributes in his/her way, or how much we should let smaller, flexible teams within the team form and progress, following a bottom-up initiative rather than imposing one from the top down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. - but hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it .... but in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always let the small and agile sub-teams form and organize themselves spontaneously, but clearly this method leaves out some people (be it for their own personal attitude or be it by pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team work, where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris's words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and the others waiting. Sometimes, even, some people don't do what they are supposed to do, so nothing happens ... this could not be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for a 15-member team effort, and it is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings, hate, anger, envy, sympathy, love, etc. about the others. Unfortunately(?), this can lead to situations where, in theory, a group of brains could work together, but not the same group of people. As far as I am concerned, this happened many times during my ACT period. And this is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I collaborate efficiently with those people who are "close" to me not only in scientific interest, but also in some private sense. And I have people around me who have interesting topics and might need my help and knowledge, but somehow, it just does not work. Simply lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, why it worked: only the brains and not the people worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it is the original article's mention of "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people); it is something that has been done countless times before. Do you think that the fact that it is now mathematicians who are doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...), there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given this kind of variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant in a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that
  •  
    hey, just caught up on the discussion. For me what is really new is mainly the framework in which this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for the BOINC-type projects, which are very different because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as I read in the wiki (if an article is written, who writes it, whose names go on the paper). They chose to refer to the project, and not to the individual researchers, as a temporary solution... It is not so surprising for me that this type of work has first been done in the domain of mathematics. Perhaps I have an idealized view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so this model of research does not fit, in my opinion, with the way research is governed. Anyway, we had a discussion on the Ariadnet about how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much of a problem with giving credit. It is not the first time a group of researchers has collectively taken credit for a result under a group umbrella, e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible there's no way to fake contributions or to give undue credit, and one could cite a group paper without problems in his/her CV, research grant application, etc.
  •  
    Well, my point was more that it could be a problem with how the current system works. Let's say you want a grant or a position; then the jury will count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of these journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that I should visit you guys at ESTEC... :-)
  •  
    urgently!! btw: we will have the ACT Christmas dinner on the 9th in the evening ... are you coming?
Luís F. Simões

Why Randomly-Selected Politicians Would Improve Democracy - Technology Review - 4 views

  • If Pluchino sounds familiar, it's because we've talked about him and his pals before in relation to the Peter Principle that incompetence always spreads through big organisations. Back in 2009, he and his buddies created a model that showed how promoting people at random always improves the efficiency of the organisation. These guys went on to win a well-deserved Ig Nobel prize for this work.
  • Ref: arxiv.org/abs/1103.1224: Accidental Politicians: How Randomly Selected Legislators Can Improve Parliament Efficiency
  •  
    I think I am starting to understand why Italian politics does so horribly badly...
  •  
    ... because they don't follow this rule!
  •  
    According to the authors we have four types of people in the parliament: 1) intelligent people, whose actions produce a gain for both themselves and for other people; 2) helpless/naive people, in the top-left quadrant, whose actions produce a loss for themselves but a gain for others; 3) bandits, whose actions produce a gain for themselves but a loss for other people; 4) stupid people, in the bottom-left quadrant, whose actions produce a loss for themselves and also for other people. According to the above definition it is clear that their model does not apply to the Italian parliament, where we only have stupid people and bandits.
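A minimal sketch of the quadrant classification described in the comment above; the classify helper and the example payoffs are hypothetical illustrations, not taken from the paper:

```python
# Classify an agent by the sign of the payoff its actions produce for
# itself and for everyone else (the two axes of the quadrant diagram).
def classify(own_gain: float, others_gain: float) -> str:
    if own_gain >= 0 and others_gain >= 0:
        return "intelligent"      # gain for self and gain for others
    if own_gain < 0 and others_gain >= 0:
        return "helpless/naive"   # loss for self, gain for others
    if own_gain >= 0 and others_gain < 0:
        return "bandit"           # gain for self, loss for others
    return "stupid"               # loss for self and loss for others

if __name__ == "__main__":
    # Hypothetical legislators with (own gain, others' gain) payoffs.
    for payoff in [(1.0, 2.0), (-0.5, 1.0), (2.0, -1.0), (-1.0, -1.0)]:
        print(payoff, "->", classify(*payoff))
```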
Thijs Versloot

The people who change the world... #thenextweb - 1 views

  •  
    Love tip 3.. that's why I am at the ACT of course :) 3. Surround yourself with pros Surround yourself with people who are self-assured, and live life without compromising their core values. These people will rub off on you quickly. Finally.. The world is already full of people who obey the status quo. But the people who don't give a fuck are the ones that change the world.
Francesco Biscani

Apple's Mistake - 5 views

  •  
    Nice opinion piece.
  •  
    nice indeed .... especially like: "They make such great stuff, but they're such assholes. Do I really want to support this company? Should Apple care what people like me think? What difference does it make if they alienate a small minority of their users? There are a couple reasons they should care. One is that these users are the people they want as employees. If your company seems evil, the best programmers won't work for you. That hurt Microsoft a lot starting in the 90s. Programmers started to feel sheepish about working there. It seemed like selling out. When people from Microsoft were talking to other programmers and they mentioned where they worked, there were a lot of self-deprecating jokes about having gone over to the dark side. But the real problem for Microsoft wasn't the embarrassment of the people they hired. It was the people they never got. And you know who got them? Google and Apple. If Microsoft was the Empire, they were the Rebel Alliance. And it's largely because they got more of the best people that Google and Apple are doing so much better than Microsoft today. Why are programmers so fussy about their employers' morals? Partly because they can afford to be. The best programmers can work wherever they want. They don't have to work for a company they have qualms about. But the other reason programmers are fussy, I think, is that evil begets stupidity. An organization that wins by exercising power starts to lose the ability to win by doing better work. And it's not fun for a smart person to work in a place where the best ideas aren't the ones that win."
  •  
    Poor programmers can complain, but they will keep developing applications for the iPhone as long as their bosses tell them to do so... From my experience in mobile software development I assure you it's not the pain of the programmer that dictates what is done, but the customer's demand. Even though this way the quality of applications is somewhat worse than it could be, clients won't complain as they have no reference point. And things will stay as they are: Apple censoring the applications, clients paying for stuff that "sometimes just does not work" (it's normal, isn't it??), and programmers complaining, but obediently making iPhone apps...
LeopoldS

Operation Socialist: How GCHQ Spies Hacked Belgium's Largest Telco - 4 views

  •  
    interesting story with many juicy details on how they proceed ... (similarly interesting nickname for the "operation" chosen by our british friends) "The spies used the IP addresses they had associated with the engineers as search terms to sift through their surveillance troves, and were quickly able to find what they needed to confirm the employees' identities and target them individually with malware. The confirmation came in the form of Google, Yahoo, and LinkedIn "cookies," tiny unique files that are automatically placed on computers to identify and sometimes track people browsing the Internet, often for advertising purposes. GCHQ maintains a huge repository named MUTANT BROTH that stores billions of these intercepted cookies, which it uses to correlate with IP addresses to determine the identity of a person. GCHQ refers to cookies internally as "target detection identifiers." Top-secret GCHQ documents name three male Belgacom engineers who were identified as targets to attack. The Intercept has confirmed the identities of the men, and contacted each of them prior to the publication of this story; all three declined comment and requested that their identities not be disclosed. GCHQ monitored the browsing habits of the engineers, and geared up to enter the most important and sensitive phase of the secret operation. The agency planned to perform a so-called "Quantum Insert" attack, which involves redirecting people targeted for surveillance to a malicious website that infects their computers with malware at a lightning pace. In this case, the documents indicate that GCHQ set up a malicious page that looked like LinkedIn to trick the Belgacom engineers. (The NSA also uses Quantum Inserts to target people, as The Intercept has previously reported.) A GCHQ document reviewing operations conducted between January and March 2011 noted that the hack on Belgacom was successful, and stated that the agency had obtained access to the company's
  •  
    I knew I wasn't using TOR often enough...
  •  
    Cool! It seems that after all it is best to restrict employees' internet access to work-critical areas only... @Paul: TOR works at the network level, so it would not help much here, as cookies (application level) were exploited.
Ma Ru

I know at least *some* of you will like it... - 13 views

shared by Ma Ru on 29 Mar 10
LeopoldS liked it
  •  
  • ...9 more comments...
  •  
    Shit!! I only got 79, should have lied better...
  •  
    My score was obtained with *sincere* answers, don't cheat!
  •  
    wow, 80...! didn't think I was such a nerd...!
  •  
    Dario, Francesco, we're waiting for your scores... are you afraid of the truth??
  •  
    hmm "Low Ranking Nerd. Definitely a nerd but low on the totem pole of nerds." , as of a score of 66
  •  
    I am disappointed!!!!! Shame on me.......
  •  
    Sigh
  •  
    wow!
  •  
    My girlfriend... She must be an archaeological nerd...
  •  
    Great Scott, Leo! Honest answers?? I was kinda expecting Francesco's score, but this...
johannessimon81

Asteroid mining could lead to self-sustaining space stations - VIDEO!!! - 5 views

  •  
    Let's all start up some crazy space companies together: harvest hydrogen on Jupiter, trap black holes as unlimited energy supplies, use high temperatures close to the sun to bake bread! Apparently it is really easy to do just about anything and Deep Space Industries is really good at it. Plus: in their video they show Mars One concepts while referring to ESA and NASA.
  • ...3 more comments...
  •  
    I really wonder what they wanna mine out there? Is there such a high demand for... rocks?! And do they really think they can collect fuel somewhere?
  •  
    Well, they want to avoid having to send resources into space and rather make it all in space. The first mission is just to find possible asteroids worth mining and bring some asteroid rocks to Earth for analysis. In 2020 they want to start mining for precious metals (e.g. nickel), water and such. They also want to put up a 3D printer in space so that it would extract, separate and/or fuse asteroidal resources together and then print the needed structures directly in space. And even though on Earth it's just rocks, in space a tonne of them has an estimated value of 1 million dollars (as opposed to 4000 USD on Earth). Although I like the idea, I would put DSI in the same basket as those Mars One nutters 'cause it's not gonna happen.
  •  
    I will get excited once they demonstrate they can put a random rock into their machine and out comes a bicycle (then the obvious next step is a space station).
  •  
    hmm, aside from the technological feasibility, their approach should still be taken as an example, and deserves a little support. By tackling such difficult problems, they will devise innovative stuff. Plus, even if this doomed-to-fail endeavour may still seem useless to you, it creates jobs and makes people think... that is already a positive! Final word: how is that different from what Planetary Resources plans to do? It is founded by a bunch of so-called "nuts" ... (http://www.planetaryresources.com/team/) ! A little thought: "We must never be afraid to go too far, for success lies just beyond" - Proust
  •  
    I don't think that this proposal is very different from the one by Planetary Resources. My scepticism is rooted in the fact that - at least to my knowledge - fully autonomous mining technology has not even been demonstrated on Earth. I am sure that their proposition is in principle (technically) feasible but at the same time I do not believe that a privately funded company will find enough people to finance a multi-billion dollar R&D project that may or may not lead to an economically sensible outcome, i.e. generate profit (not income - you have to pay back the R&D cost first) within the next 25 years. And on that timescale anything can happen - for all we know we will all be slaves to the singularity by the time they start mining. I do think that people who tackle difficult problems deserve support - and lots of it. It seems however that up till now they have only tackled making a promotional video... About job creation (sorry for the sarcasm): if usefulness is not so important my proposal would be to give shovels to two people - person A digs a hole and person B fills up the same hole at the same time. The good thing about this is that you can increase the number of jobs created simply by handing out more shovels.
dejanpetkow

[1202.5708] The Alcubierre Warp Drive: On the Matter of Matter - 1 views

  •  
    News about the warp drive based on the original Alcubierre metric but with a modified shape function. The focus of the research was on the interaction between the warp bubble and cosmic particles. Result: people on board need shielding. People at the journey's destination might get roasted (by gamma rays, if you want to know).
LeopoldS

Peter Higgs: I wouldn't be productive enough for today's academic system | Science | Th... - 1 views

  •  
    what an interesting personality ... very sympathetic Peter Higgs, the British physicist who gave his name to the Higgs boson, believes no university would employ him in today's academic system because he would not be considered "productive" enough.

    The emeritus professor at Edinburgh University, who says he has never sent an email, browsed the internet or even made a mobile phone call, published fewer than 10 papers after his groundbreaking work, which identified the mechanism by which subatomic material acquires mass, was published in 1964.

    He doubts a similar breakthrough could be achieved in today's academic culture, because of the expectations on academics to collaborate and keep churning out papers. He said: "It's difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964."

    Speaking to the Guardian en route to Stockholm to receive the 2013 Nobel prize for science, Higgs, 84, said he would almost certainly have been sacked had he not been nominated for the Nobel in 1980.

    Edinburgh University's authorities then took the view, he later learned, that he "might get a Nobel prize - and if he doesn't we can always get rid of him".

    Higgs said he became "an embarrassment to the department when they did research assessment exercises". A message would go around the department saying: "Please give a list of your recent publications." Higgs said: "I would send back a statement: 'None.' "

    By the time he retired in 1996, he was uncomfortable with the new academic culture. "After I retired it was quite a long time before I went back to my department. I thought I was well out of it. It wasn't my way of doing things any more. Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough."

    Higgs revealed that his career had also been jeopardised by his disagreements in the 1960s and 7
  •  
  •  
    interesting one - Luzi will like it :-)
Luís F. Simões

Bitcoin P2P Currency: The Most Dangerous Project We've Ever Seen - 10 views

  • After months of research and discovery, we’ve learned the following: 1. Bitcoin is a technologically sound project. 2. Bitcoin is unstoppable without end-user prosecution. 3. Bitcoin is the most dangerous open-source project ever created. 4. Bitcoin may be the most dangerous technological project since the internet itself. 5. Bitcoin is a political statement by technotarians (technological libertarians).* 6. Bitcoins will change the world unless governments ban them with harsh penalties.
  • The benefits of a currency like this: a) Your coins can’t be frozen (like a Paypal account can be); b) your coins can’t be tracked; c) your coins can’t be taxed; d) transaction costs are extremely low (sorry, credit card companies)
  • An individual with the name -- or perhaps handle -- of Satoshi Nakamoto first wrote about bitcoins in a paper called Bitcoin: A Peer-to-Peer Electronic Cash System.
  • ...1 more annotation...
  • * We made this term up to describe the “good people” of the internet who believe in the fundamental rights of individuals to be free, have free speech, fight hypocrisy and stand behind logic, technology and science over religion, political structure and tradition. These are the people who build and support things like Wikileaks, Anonymous, Linux and Wikipedia. They think that people can, and should, govern themselves. They are against external forms of control such as DRM, laws that are bought and sold by lobbyists, and religions like Scientology. They include splinter groups that enforce these ideals in the form of hacktivism, such as the takedown of the Sony Playstation Network after Sony tried to prosecute a hacker for unlocking its console.
  •  
    Sounds good!
  • ...9 more comments...
  •  
    wow, it's frightening! It's the dream of every anarchist, every drug, arms or human dealer! The world turned into a global tax haven... The idea is clever; however, it will not replace real money because 1 - no one will build a fortune on bitcoin if a technological breakthrough can ruin them; 2 - governments have never allowed parallel money to flourish on their territory, so it will be almost impossible to exchange bitcoin for euros or dollars
  •  
    interesting stuff. Anyone read Cryptonomicon by Neal Stephenson? Similar theme.
  •  
    :) yes. One of the comments on reddit was precisely drawing the parallels with Neal Stephenson's Snowcrash / Diamond Age / Cryptonomicon. Interesting stuff indeed. It has a lot of potential for misuse, but also opens up new possibilities. We've discussed recently how emerging technologies will drive social change. Whether it's the likes of NSA / CIA who will benefit the most from the Twitters, Facebooks and so on, by gaining greater power for control, or whether individuals are being empowered to at least an identical degree. We saw last year VISA / PayPal censoring WikiLeaks... Well, here's a way for any individual to support such an organization, in a fully anonymous and uncontrollable way...
  •  
    One of my colleagues has made a nice, short write-up about BitCoin: http://www.pds.ewi.tudelft.nl/~victor/bitcoin.html
  •  
    very nice analysis indeed - thanks Tamas for sharing it!
  •  
    mmm, I'm not an expert, but it seems to me that, even if these criticisms are true, there is one fundamental difference between the money you exchange on the internet via your bank and bitcoins. The first is virtual money and the second aims at being real, physical money, even if digital, in the same way as banknotes, coins, or gold.
  •  
    An algorithmic wannabe central bank issuing untraceable, tax-free money between internet users? Not more likely than the end of the world that is supposed to take place tomorrow, in my opinion. Algorithms don't usually assault women though! :P
  •  
    well, most money is anyway just virtual and only based on expectations and trust ... (see e.g. http://en.wikipedia.org/wiki/Money_supply) and thus if people trust that this "money" has some value in the sense that they can get something of value to them in exchange, then not much more is needed it seems to me ...
  •  
    @Leopold: ok, let's use the right words then. Bitcoin aims at being a currency ("physical objects generally accepted as a medium of exchange", from wikipedia), which is different from a "demand deposit". In the article proposed by Tamas he compares what cannot be compared (currencies, demand deposits and their means of exchange). The interesting question is whether one can create a digital currency which is too difficult to counterfeit. As far as I know, there is no existing digital currency except bitcoins (and maybe the currencies from games such as Second Life and others, but those are of limited use in the real world).
  •  
    well, of course money is trust, and even more so loans and credit, and even more so stock and bond markets. It all represents trust and expectations. However, since the first banks 500 years ago and the first loans etc. etc., and given that bonds and currencies bring down whole countries (Greece lately) and are mainly controlled by large financial centres and (central) banks, banks have always been on the winning side no matter what, and that isn't going to change easily. So if you are talking about these new currencies it would be a new era, not just a new currency. So should Greece convert its debt to bitcoins ;P ?
  •  
    well, from 1936 to 1993 the central bank of France was owned by the state and was supposed to serve the general interest...
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also ... now that the Commission funded the project, Blue Brain is a rather big hit. Btw, Nicolelis is a rather credited neuroscientist.
  • ...14 more comments...
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to) AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains is not my thing).. I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather about how to program them well enough ...... and that's not bringing us anywhere near the singularity
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed was the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most is that I didn't get the retail store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called "theory" that we can put on a computer and run in a finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question of whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain, but this will be rather useless since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to. Stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine as long as any brain gets started :) that is the goal :)
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which is not a biological brain) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but this will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much for the brain to function and exhibit macro-characteristics useful to make science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a tiny functional unit by adding neurons and complexity slowly and in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain." In Artificial Life the whole environment+bodies&brains is simulated. You have also the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains, than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platfrom onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models on more realistic settings, in which the models' outputs get to effect many other systems, and throuh them feed back into its future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system actually still works? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
duncan barker

Video - The Great Global Warming Swindle - 2 views

  •  
    joke posting??
  • ...1 more comment...
  •  
    No. People SHOULD look at alternative views to be more informed. Criticism of areas of science is a GOOD thing; it helps science to grow. Unfortunately, when it comes to this issue, people, including ACT members, have a closed mind and do not want to listen to alternative viewpoints. But I am sure many people in the group have seen it before. Have you?
  •  
    This is why I always post crazy stuff .... it's disruptive. ..... although sometimes it's a joke ;)
  •  
    why not this one then? http://www.venganza.org/
Francesco Biscani

Former Google CIO says business misses key people marks | ITworld - 2 views

  • There is a whole cottage industry of people talking about innovation, including all kinds of garbage
  • the more project management you do the less likely your project is to succeed
  • Everyone knew we shouldn't build our own hardware as it was 'dumb', but everyone was wrong.
  • ...1 more annotation...
  • Just because you can do something with technology that doesn't mean you should do something with technology,
  •  
    Some juicy sound bites from the former Google CIO.
nikolas smyrlakis

China breaks ground on space launch center - Yahoo! News - 0 views

  •  
    China broke ground on its fourth space center Monday, highlighting the country's soaring space ambitions six years after it sent its first man into orbit. - 6,000 people had to be relocated for the construction
  •  
    > 6,000 people had to be relocated. So what? It's less than 0.0005% of the population...
Francesco Biscani

Tom Sawyer, whitewashing fences, and building communities online - 3 views

  • If you are looking to ideas like open source or social media as simple means to get what you want for your company, it’s time to rethink your community strategy.
  • I’ve talked to people at companies who are considering “open sourcing” their product because they think there is an army of people out there who will jump at the chance to build their products for them. Many of these people go on to learn tough but valuable lessons in building community. It’s not that simple.
  •  
    Illuminating article about corporations trying to exploit "open source" and not getting what they want.
  •  
    I like the Red Hat definition: "To be the catalyst in communities of customers, contributors, and partners creating better technology the open source way."
  •  
    yeah, it is the same with crowdsourcing in general, when some company "managers" see how much cheaper they could do it but don't understand where it comes from...
Marcus Maertens

[1806.03856] Computing the minimal crew for a multi-generational space travel towards P... - 5 views

  •  
    How many people do we actually need to put on that ship?
  •  
    We should invite these people to the AF special issue
  •  
    Sounds really interesting. Their simulations don't account for biological issues (mutation, migration, selection, drift, founder effect) though, so the numbers are very low. This paper (https://ac.els-cdn.com/S0094576513004669/1-s2.0-S0094576513004669-main.pdf?_tid=6bec2a5c-f05f-4024-b4de-af78ab06fd42&acdnat=1531827379_d4f0be1b193873890d6e5b4574e82f2e) takes those effects and their implications for the genetic composition of populations into account, but the numbers are enormous. Do you have an idea why they (Marin and Beluffi) wouldn't put those effects into the simulations?
Dario Izzo

Climatologists are no Einsteins, says his successor | NJ.com - 2 views

  •  
    I know at least of a few people who share this point of view :)
  •  
    I think it is worth noting that Dyson is not saying that climate change is an illusion - it is evident that a lot of CO2 is emitted into the atmosphere and hence something will change. His point is that we just don't know what will change and by how much, and that (much) more experimental data is necessary to make predictive models.
  •  
    On missing experimental work: I just read in the news that condensation in cirrus clouds has been studied recently and that the models were incorrect as to the significance of organic substances and soot in cirrus cloud formation. http://www.sciencemag.org/content/early/2013/05/08/science.1234145
johannessimon81

"Natural Light Cloaking for Aquatic and Terrestrial Creatures" - 3 views

  •  
    Cheap and scalable invisibility cloaks are being developed. The setup is so trivial that I would almost call it a "trick" (as in "magician's trick"): 6 prisms of n=1.78 glass. Nonetheless, it does the job of cloaking an object at visible wavelengths and from several directions.
  • ...6 more comments...
  •  
    can we build one?
  •  
    Yes, I just did :-) It is on my desk
  •  
    New video here (smaller file than previous): "https://dl.dropboxusercontent.com/u/58527156/20130613_101701.mp4" Note how close to the center of the field of view the hidden objects are. I am quite surprised that such poor lenses create such a sharp focus.
  •  
    Well.. I would say that it is not "fully cloaking", as the image behind is mirrored as well
  •  
    That just means that you have to double the setup, i.e., put 4 glasses in a row. Of course the obvious drawback is that you can only look at this cloak from one direction.
  •  
    Is this really new? I don't know, but I know that the original idea of cloaking was pretty different. When cloaking as an application of transformation optics became popular, people tried to make devices that work for any incidence angle, any polarization and in full wave optics (not just the ray approximation). This is really hard to achieve, and I guess that the people who tried to make such devices knew exactly that the task becomes almost trivial by dropping at least two of the three conditions above.
  •  
    I think it is very easy to call something trivial when you're not the one who invested considerable time (5 min in my case) to design a cloaking device and fill the coffee mugs with water... Also, I did not really violate that many conditions: true, I reduced the number of dimensions in which the device works to 1 (as opposed to the 2 dimensions of many metamaterial cloaks). However, neither the polarization nor the wave phase and wave vector should be affected in my setup (so it works in full wave optics) - apart maybe from the imperfect lens distortion, but hey, I was improvising.
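For a feel of why n=1.78 glass bends rays strongly enough for this kind of ray-optics cloak, here is a minimal sketch using Snell's law and the standard minimum-deviation formula for a single prism; the 30-degree apex angle and the 45-degree incidence angle are assumed illustrative values, not taken from the paper or the thread above:

```python
import math

def snell_refraction_angle(theta_in_deg: float, n1: float, n2: float) -> float:
    """Refraction angle in degrees from Snell's law: n1*sin(t1) = n2*sin(t2)."""
    s = n1 * math.sin(math.radians(theta_in_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

def prism_min_deviation(apex_deg: float, n: float) -> float:
    """Minimum deviation in degrees of a prism: n = sin((A+d)/2) / sin(A/2)."""
    half_apex = math.radians(apex_deg) / 2.0
    return math.degrees(2.0 * math.asin(n * math.sin(half_apex))) - apex_deg

if __name__ == "__main__":
    n_glass = 1.78  # refractive index quoted in the bookmarked paper
    # A ray hitting the glass at 45 degrees is bent to about 23.4 degrees.
    print(snell_refraction_angle(45.0, 1.0, n_glass))
    # An assumed 30-degree apex prism deviates the ray by about 24.9 degrees.
    print(prism_min_deviation(30.0, n_glass))
```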
santecarloni

Focus Forward Films - 3 views

  •  
    Focus Forward is an unprecedented new series of 30 three-minute stories about innovative people who are reshaping the world through act or invention, directed by the world's most celebrated documentary filmmakers.
LeopoldS

Schumpeter: More than just a game | The Economist - 3 views

  •  
    remember the discussion I tried to trigger in the team a few weeks ago ...
  • ...5 more comments...
  •  
    main quote I take from the article: "gamification is really a cover for cynically exploiting human psychology for profit"
  •  
    I would say that it applies to management in general :-)
  •  
    which is exactly why it will never work .... and surprisingly "managers" fail to understand this very simple fact.
  •  
    ... "gamification is really a cover for cynically exploiting human psychology for profit" --> "Why Are Half a Million People Poking This Giant Cube?" http://www.wired.com/gamelife/2012/11/curiosity/
  •  
    I think the "essence" of the game is its uselessness... workers need exactly the inverse, to find a meaning in what they do !
  •  
    I love the linked article provided by Johannes! It expresses very elegantly why I still fail to understand even extremely smart and busy people, in my view, apparently wasting their time playing computer games - but I recognise that there is something in games that we apparently need / that gives us something we cherish .... "In fact, half a million players so far have registered to help destroy the 64 billion tiny blocks that compose that one gigantic cube, all working in tandem toward a singular goal: discovering the secret that Curiosity's creator says awaits one lucky player inside. That's right: After millions of man-hours of work, only one player will ever see the center of the cube. Curiosity is the first release from 22Cans, an independent game studio founded earlier this year by Peter Molyneux, a longtime game designer known for ambitious projects like Populous, Black & White and Fable. Players can carve important messages (or shameless self-promotion) onto the face of the cube as they whittle it to nothing. Molyneux is equally famous for his tendency to overpromise and under-deliver on his games. In 2008, he said that his upcoming game would be "such a significant scientific achievement that it will be on the cover of Wired." That game turned out to be Milo & Kate, a Kinect tech demo that went nowhere and was canceled. Following this, Molyneux left Microsoft to go indie and form 22Cans. Not held back by the past, the Molyneux hype train is going full speed ahead with Curiosity, which the studio grandiosely promises will be merely the first of 22 similar "experiments." Somehow, it is wildly popular. The biggest challenge facing players of Curiosity isn't how to blast through the 2,000 layers of the cube, but rather successfully connecting to 22Cans' servers. So many players are attempting to log in that the server cannot handle it. Some players go for utter efficiency, tapping rapidly to rack up combo multipliers and get more
  •  
    why are video games so different from collecting stamps or spotting birds or planes? One could say they are all just hobbies