Home / Advanced Concepts Team / Group items matching "self" in title, tags, annotations or url

tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  • ...14 more comments...
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you, F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making teamwork nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open-source developers have been organizing themselves via email since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing just fine to scale as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open-source projects are managed collaboratively by thousands of people, and many of them are in the multi-million-line-of-code range. My personal opinion on why these open models are having so many difficulties in the scientific world is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of change. There is also the fact that the barrier to entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an operating system or writing a new balanced-tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! But maybe too many things are mixed up together: open-science journals vs. traditional journals, the conservatism of the science community vs. programmers (to me, one of the reasons for this might be the average age of the two groups, which probably differs by more than 10 years ...), and then email vs. other collaboration tools ... I will have to look at the paper more carefully now ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize"; what does it mean, indeed? In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination means. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively because of wikipedia and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikipedia, forums ... because that would be lame, and then your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question of ACT team work is one that comes back continuously, and so far it has always boiled down to the question of how much there needs to and should be a team project to which everybody in the team contributes in his/her way, versus how much we should let smaller, flexible teams within the team form and progress, following a bottom-up initiative rather than imposing one from the top down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. But hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it ... though in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it for their own personal attitude or be it for pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team work where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris' words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and the others waiting. Sometimes, even, some people don't do what they are supposed to do, so nothing happens ... this cannot be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for a 15-member team effort, and it is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings, hate, anger, envy, sympathy, love, etc. about the others. Unfortunately(?), this can lead to situations where, in theory, a group of brains could work together, but the same group of people cannot. As far as I am concerned, this happened many times during my ACT period. And it is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I collaborate efficiently with those people who are "close" to me not only in scientific interest, but also in some private sense. And I have people around me who have interesting topics and might need my help and knowledge, but somehow, it just does not work. Simply a lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, here is why it worked: only the brains, and not the people, worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it was the original article's mention of the "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people); it is something that has been done countless times before. Do you think that the fact that it is now mathematicians who are doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...); there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all, I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given this kind of variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant in a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that
  •  
    hey, just caught up on the discussion. For me what is very new is mainly the framework in which this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for BOINC-type projects, which are very different because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as far as I read in the wiki (if an article is written, who writes it, and whose names go on the paper?). They chose to credit the project, and not the individual researchers, as a temporary solution... It is not so surprising to me that this type of work was first done in the domain of mathematics. Perhaps I have an idealized view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so this model of research does not fit, in my opinion, with the way research is governed. Anyway, we had a discussion on the Ariadnet about how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much of a problem with giving credit. It is not the first time a group of researchers has collectively taken credit for a result under a group umbrella; see, e.g., Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible, there is no way to fake contributions or to give undue credit, and one could cite a group paper without problems in one's CV, research grant application, etc.
  •  
    Well, my point was more that it could be a problem with how the actual system works. Let's say you want a grant or a position; the jury will then count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of these journals. You would then have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that I should visit you guys at ESTEC... :-)
  •  
    urgently!! btw: we will have the ACT Christmas dinner on the 9th in the evening ... are you coming?
jcunha

Maze-solving automatons can repair broken circuits - 1 views

  •  
    Researchers in Bangalore, India, together with the Indian Space Research Organisation, came up with an intelligent self-healing algorithm that can locate open-circuit faults and repair them in real time. They used an insulating silicone oil containing conductive particles. Whenever a fault happens, an electric field develops there, causing the fluid to move in a 'thermodynamic automaton' way and repair the fault. The researchers make clear it could be an advantage for electronics in harsh environments, such as space satellites.
Chritos Vezyri

New fabrication technique could provide breakthrough for solar energy systems - 3 views

  •  
    The principle behind this is the nantenna.
  •  
    this is fantastic!!!! I have been waiting for somebody to make this happen for years. The size of the gap is critical because it creates an ultra-fast tunnel junction between the rectenna's two electrodes, allowing a maximum transfer of electricity. The nano-sized gap gives energized electrons on the rectenna just enough time to tunnel to the opposite electrode before their electrical current reverses and they try to go back. The triangular tip of the rectenna makes it hard for the electrons to reverse direction, thus capturing the energy and rectifying it to a unidirectional current. Impressively, the rectennas, because of their extremely small and fast tunnel diodes, are capable of converting solar radiation from the infrared region through the extremely fast and short wavelengths of visible light - something that has never been accomplished before. Silicon solar panels, by comparison, have a single band gap which, loosely speaking, allows the panel to convert electromagnetic radiation efficiently at only one small portion of the solar spectrum. The rectenna devices don't rely on a band gap and may be tuned to harvest light over the whole solar spectrum, creating maximum efficiency. Through atomic layer deposition, Willis has shown he is able to precisely coat the tip of the rectenna with layers of individual copper atoms until a gap of about 1.5 nanometers is achieved. The process is self-limiting and stops at a 1.5-nanometer separation.
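    A rough order-of-magnitude check on why the tunnel diode must be so fast (my own illustrative estimate, not a figure from the article): to rectify visible light, the junction has to respond within a fraction of a single optical cycle.

```latex
% Illustrative estimate: optical frequency of ~600 nm light and the
% corresponding response time required of the rectifying junction.
\nu = \frac{c}{\lambda}
    = \frac{3\times 10^{8}\,\mathrm{m\,s^{-1}}}{6\times 10^{-7}\,\mathrm{m}}
    \approx 5\times 10^{14}\,\mathrm{Hz},
\qquad
\tau \sim \nu^{-1} \approx 2\,\mathrm{fs}.
```

    A femtosecond-scale response time is far beyond ordinary semiconductor diodes, which is why the ~1.5 nm tunnel gap, with its extremely short electron transit time, matters.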
jcunha

Researchers design metamaterial that buckles selectively - 4 views

  •  
    A new 3D-printed macrostructure exhibits selective buckling, opening the way for custom shape-memory materials, found our neighbor scientists from the Lorentz Institute of Leiden University. I wonder if it can be applied to self-assembled deployment of structures.
Francesco Biscani

Apple's Mistake - 5 views

  •  
    Nice opinion piece.
  •  
    nice indeed .... especially like: "They make such great stuff, but they're such assholes. Do I really want to support this company? Should Apple care what people like me think? What difference does it make if they alienate a small minority of their users? There are a couple reasons they should care. One is that these users are the people they want as employees. If your company seems evil, the best programmers won't work for you. That hurt Microsoft a lot starting in the 90s. Programmers started to feel sheepish about working there. It seemed like selling out. When people from Microsoft were talking to other programmers and they mentioned where they worked, there were a lot of self-deprecating jokes about having gone over to the dark side. But the real problem for Microsoft wasn't the embarrassment of the people they hired. It was the people they never got. And you know who got them? Google and Apple. If Microsoft was the Empire, they were the Rebel Alliance. And it's largely because they got more of the best people that Google and Apple are doing so much better than Microsoft today. Why are programmers so fussy about their employers' morals? Partly because they can afford to be. The best programmers can work wherever they want. They don't have to work for a company they have qualms about. But the other reason programmers are fussy, I think, is that evil begets stupidity. An organization that wins by exercising power starts to lose the ability to win by doing better work. And it's not fun for a smart person to work in a place where the best ideas aren't the ones that win."
  •  
    Poor programmers can complain, but they will keep developing applications for the iPhone as long as their bosses tell them to do so... From my experience in mobile software development, I assure you it's not the pain of the programmer that dictates what is done, but the customer's demand. Even though the quality of applications is somewhat worse this way than it could be, clients won't complain, as they have no reference point. And things will stay as they are: Apple censoring the applications, clients paying for stuff that "sometimes just does not work" (it's normal, isn't it??), and programmers complaining, but obediently making iPhone apps...
LeopoldS

Short-term meditation induces white matter changes in the anterior cingulate - PNAS - 3 views

  •  
    one more try to get you interested in this ... seems that it is slowly but surely moving into the domain of serious science ...
  •  
    Why don't you try this out? 10 minutes group meditation before every ACT meeting... Should be fun!
  •  
    Great, just great!! The conclusion seems to be: "Thus IBMT could provide a means for improving self-regulation and perhaps reducing or preventing various mental disorders." Why all this neuro-bio-nonsense?? Wasn't this conclusion known before, just using good old classic psychology and the like? Again, one of these studies that thinks it provides new evidence just because they made a boring brain scan...
Juxi Leitner

Pentagon's Shape-Shifting Bot Folds Into Boat, Plane | Danger Room | Wired.com - 0 views

  • Darpa-backed electrical engineers at the two schools released the stunning results: a shape-shifting sheet of rigid tiles and elastomer joints that can fold itself into a little plane or a boat on demand.
  • In Darpa’s dreams, this work will eventually lead to everything from morphing aircraft to self-styling uniforms to a “universal spare part.”
  •  
    haha! is this a joke...?
  •  
    well, I guess the news headline is trying a bit too hard to be attractive :)
Ma Ru

Robots on TV: AI goes back to baby basics - 0 views

  •  
    A bit of self-ad here :-) Hear my lab colleague Tony Morse speaking about developmental robotics, and meet our little iCub... As a bonus, have a peep into the kitchen and messy lab of the guys downstairs... my office is of course much nicer!!!
  •  
    "and yet one (idea) that allowed to make new predictions that are now being tested in children"!!! Which one? I am curious.
  •  
    Will ask him today...
Luís F. Simões

Inferring individual rules from collective behavior - 2 views

  •  
    "We fit data to zonal interaction models and characterize which individual interaction forces suffice to explain observed spatial patterns." You can get the paper from the first author's website: http://people.stfx.ca/rlukeman/research.htm
  •  
    PNAS? It didn't strike me as something very new though... We should refer to it in the roots study though: "Social organisms form striking aggregation patterns, displaying cohesion, polarization, and collective intelligence. Determining how they do so in nature is challenging; a plethora of simulation studies displaying life-like swarm behavior lack rigorous comparison with actual data because collecting field data of sufficient quality has been a bottleneck." For roots it is NO bottleneck :) Tobias was right :)
  •  
    Here they assume all relevant variables influencing behaviour are being observed, namely the relative positions and orientations of all ducks in the swarm. So, they make movies of the swarm's movements, process them, and then fit the models to that data. In the roots, though we can observe the complete final structure, or even obtain time-lapse movies showing how that structure came to be, getting the measurements of all relevant soil variables (nitrogen, phosphorus, ...) throughout the soil, and over time, would be extremely difficult. So I guess a replication of the kind of work they did, but for roots, would be hard. Nice reference, though.
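    For anyone curious what a "zonal interaction model" actually computes, here is a minimal, illustrative sketch of the general model class the paper fits: repulsion from very close neighbours, velocity alignment at intermediate range, attraction at long range. The zone radii, weights, and integration step below are placeholder assumptions, not the paper's fitted values.

```python
import numpy as np

# Illustrative zonal interaction model (boids-like). Fitting, as in the
# paper, would mean tuning the radii/weights until the simulated spatial
# patterns match the observed ones. All constants here are assumptions.
R_REPULSE, R_ALIGN, R_ATTRACT = 1.0, 3.0, 6.0   # zone radii (body lengths)
W_REPULSE, W_ALIGN, W_ATTRACT = 2.0, 1.0, 0.5   # interaction strengths

def social_force(i, pos, vel):
    """Net interaction force on individual i from all flock mates."""
    force = np.zeros(2)
    for j in range(len(pos)):
        if j == i:
            continue
        offset = pos[j] - pos[i]
        dist = np.linalg.norm(offset)
        if dist < R_REPULSE:                 # too close: move away
            force -= W_REPULSE * offset / dist
        elif dist < R_ALIGN:                 # mid range: match velocity
            force += W_ALIGN * (vel[j] - vel[i])
        elif dist < R_ATTRACT:               # far: move closer
            force += W_ATTRACT * offset / dist
    return force

# One Euler integration step for a small random flock.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, (20, 2))
vel = rng.normal(0.0, 1.0, (20, 2))
dt = 0.1
forces = np.array([social_force(i, pos, vel) for i in range(len(pos))])
vel += dt * forces
pos += dt * vel
print("flock centroid after one step:", pos.mean(axis=0))
```

    Characterizing "which individual interaction forces suffice", as the abstract puts it, then amounts to switching these terms on and off and checking which combination still reproduces the observed patterns.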
Joris _

Swarming spacecraft to self-destruct for greater good - space - 06 September 2010 - New Scientist - 0 views

  • However, should one spacecraft in such a swarm begin to fail and risk a calamitous collision with another, it must sense its end is nigh and put itself on a course that takes it forever away from the swarm – for the greater good of the collective.
Joris _

The seeds of disruptive innovation within the European Space Agency - 24 August 2010 > Advanced Space Concepts Laboratory > On Demand Seminar - 5 views

  •  
    :p
  • ...1 more comment...
  •  
    haha :) well.. don't shoot me Dario. I wasn't involved in this disclosure. But now that the link is public, you might all want to consider subscribing to their feed: http://ewds.strath.ac.uk/space/Podcasts.aspx They have some nice talks there. One of them is by Ken McLeod, the science fiction writer. Is anyone else with me on the idea that we should also invite science fiction writers for science coffees? :)
  •  
    So nice to hear Dario again! :-) But apparently UoS needs someone a bit more skilled to handle these videos...
  •  
    Only one self-comment, à la Barney ..... suit up!
pacome delva

A Phase Transition for Light | Physical Review Focus - 3 views

  • A computer simulation shows the transition from "fermionic" to "liquid" light.
  • The possibility of sending this type of "self-focused" light pulse long distances could be important for remote sensing applications, such as LIDAR, which uses laser light the way radar uses radio waves.
  •  
    what the heck is this?? sounds really strange but highly interesting to me ....
  •  
    can we use this for energy transmission?
  •  
    read it now more carefully and the answer is probably no ...
Dario Izzo

Probabilistic Logic Allows Computer Chip to Run Faster - 3 views

  •  
    Francesco pointed out this research one year ago; we dropped it as no one was really considering it ... but in space, low CPU power consumption is crucial!! Maybe we should look back into this?
  • ...6 more comments...
  •  
    Q1: For the time being, for what purposes are computers mainly used on board?
  •  
    for navigation, control, data handling and so on .... why?
  •  
    Well, because the point is to identify an application in which such computers would do the job... That could be either an existing application which can be done sufficiently well by such computers, or a completely new application which is not already there, for instance because of power consumption constraints... Q2 would then be: for which of these purposes is strict determinism of the results not crucial? As the answer to this may not be obvious, a potential study could address this very issue. For instance, one could consider on-board navigation systems with limited accuracy... I may be talking bullshit now, but perhaps in some applications it doesn't matter whether a satellite flies on the exact route or +/-10 km to the left/right? ...and so on for the other systems. Another thing is understanding what exactly this probabilistic computing is, and what can be achieved using it (like: the result is probabilistic but falls within a defined range of precision), etc. (see the toy sketch at the end of this thread). Did they build a complete chip, or at least a sub-circuit, or still only logic gates...
  •  
    Satellites use old CPUs also because, with the trend towards higher power, modern CPUs are not very convenient from a system design point of view (TBC)... as a consequence the constraints put on on-board algorithms can be demanding. I agree with you that double precision might just not be necessary for a number of applications (navigation also), but I guess we are not talking about 10 km as an absolute value, rather about a relative error that can be tolerated at a level of (say) 10^-6. All in all you are right: a first study should assess for which applications this would be useful at all... and at what precision/power levels.
  •  
    The interest of this could be high fault tolerance for some math operations, which would have the effect of simplifying the job of coders! I don't think this is a good idea regarding power consumption for the CPU (strictly speaking). The reason we use old chips is just a matter of qualification for space, not power. For instance, a LEON SPARC (e.g. used on some platforms for ESA) consumes something like 5 mW/MHz, so it is definitely not where an engineer will look for power savings on a usual 10-15 kW spacecraft.
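    A quick sanity check of that claim (assuming, for illustration, a clock around 100 MHz; the 5 mW/MHz figure is from the comment above):

```latex
% Illustrative power-budget arithmetic, assuming a ~100 MHz clock.
P_{\mathrm{CPU}} \approx 5\,\mathrm{mW/MHz} \times 100\,\mathrm{MHz}
                 = 0.5\,\mathrm{W},
\qquad
\frac{P_{\mathrm{CPU}}}{P_{\mathrm{S/C}}}
\approx \frac{0.5\,\mathrm{W}}{10\,\mathrm{kW}} = 5\times 10^{-5}.
```

    That is, the CPU draws of order 0.005% of the bus power, so CPU power savings are indeed negligible at the system level.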
  •  
    What about speed then? Seven times faster could allow real-time navigation at higher speed (e.g. the velocity of terminal guidance for an asteroid impactor is limited to 10 km/s ... would a higher velocity be possible with faster processors?). Another issue is the radiation tolerance of the technology ... if the PCMOS chips are more tolerant to radiation, they could be space-qualified more easily.
  •  
    I don't remember what the speed factor is, but I guess this might do it! Although, I remember when using an IMU that you cannot get the data above a given rate (e.g. 20 Hz, even though the ADC samples the sensor at a slightly faster rate), so somehow it is not just the CPU that must be re-thought. When I say qualification, I also imply the "hardening" phase.
  •  
    I don't know if the (promised) one-order-of-magnitude improvements in power efficiency and performance are enough to justify looking into this. For one, it is not clear to me what embracing this technology would mean from an engineering point of view: does it need an entirely new software/hardware stack? If that were the case, in my opinion any potential benefit would be nullified. Also, is it realistic to build an entire self-sufficient chip on this technology? While the precision of floating-point computations may be degraded and still be useful, how does all this play with integer arithmetic? Keep in mind that, e.g., in the Linux kernel code floating-point calculations are not even allowed/available... It is probably possible to integrate an "accelerated" low-accuracy floating-point unit together with a traditional CPU, but then again you have more implementation overhead creeping in. Finally, recent processors by Intel (e.g., the Atom) and especially ARM boast really low power-consumption levels, while at the same time offering performance-boosting features such as multi-core and vectorization capabilities. Don't such efforts have more potential, if anything because of economic/industrial inertia?
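    To make the thread's question "what exactly is this probabilistic computing?" concrete, here is a toy sketch of the core idea as I understand it, not the actual PCMOS chip design: arithmetic where each output bit may flip with some probability, with the low-order bits allowed to be much noisier than the high-order ones, so the error magnitude stays bounded while the noisy bits could, in hardware, run at lower voltage. The per-bit error probabilities below are invented for illustration.

```python
import random

# Toy model of probabilistic arithmetic: each output bit of an adder can
# flip, with an assumed error probability that shrinks for more
# significant bits (mimicking higher supply voltage on the bits that
# matter most for error *magnitude*). All probabilities are illustrative.
BITS = 16

def flip_prob(bit):
    """Assumed per-bit error probability: noisy low-order bits,
    nearly deterministic high-order bits."""
    return 0.1 * (0.5 ** bit)

def noisy_add(a, b):
    exact = (a + b) % (1 << BITS)
    noisy = exact
    for bit in range(BITS):
        if random.random() < flip_prob(bit):
            noisy ^= 1 << bit        # flip this output bit
    return exact, noisy

random.seed(42)
errs = []
for _ in range(10_000):
    a = random.randrange(1 << 15)
    b = random.randrange(1 << 15)
    exact, noisy = noisy_add(a, b)
    errs.append(abs(noisy - exact))

# Despite frequent low-order bit flips, the magnitude of the error
# stays small relative to the full 16-bit range.
print("mean relative error:", sum(errs) / len(errs) / (1 << BITS))
```

    Whether a relative error of this order is acceptable is exactly the application-dependent question raised above (e.g. the ~10^-6 navigation tolerance mentioned earlier in the thread).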
Francesco Biscani

The End Of Gravity As a Fundamental Force - 6 views

  •  
    "At a symposium at the Dutch Spinoza-instituut on 8 December, 2009, string theorist Erik Verlinde introduced a theory that derives Newton's classical mechanics. In his theory, gravity exists because of a difference in concentration of information in the empty space between two masses and its surroundings. He does not consider gravity as fundamental, but as an emergent phenomenon that arises from a deeper microscropic reality. A relativistic extension of his argument leads directly to Einstein's equations."
  • ...8 more comments...
  •  
    Difficult for me to fully understand / believe in the holographic principle at macroscopic scales ... but potentially it looks like a revolutionary idea...
  •  
    never heard about it... seems interesting. At first sight it seems to be based on a fundamental principle that could lead to a new phenomenology, so it could be tested. Perhaps Luzi knows more about this? Did we ever work on this concept?
  •  
    The paper is quite long and I don't have the time right now to read it in detail. Just a few comments:
    * We (the ACT) definitely never did anything in this direction. But: is there a new phenomenology? I'm not sure; if the aim is just to get Einstein's theory as an emergent theory, then GR should not change (or only change in extreme conditions).
    * Emergent gravity is not new; Erik admits that too. The claim to have found a solution appears quite frequently, but most proposals actually are not emergent at all. At least, I have the impression that Erik is aware of the relevant steps to be performed.
    * It's very difficult to judge from a short glance at the paper up to which point the claims are serious and where it just starts to be advertisement. Section 6 is pretty much a collection of self-praise.
    * Most importantly: I don't understand how exactly space and time should be emergent. I think it's not new to observe that space is related to special canonical variables in thermodynamics. If anybody can see anything "emergent" in the first paragraphs of section 3, then please explain it to me. For me, this is not emergent space, but space introduced with a "sledgehammer". Time anyway seems to be a precondition, else there is nothing like energy and nothing like dynamics.
    * Finally, holography appears to be a precondition; to my knowledge no proof exists that normal (non-supersymmetric, non-stringy, non-whatever) GR has a holographic dual.
  •  
    Update: meanwhile I understood roughly what this should be about. It's well known that BH physics follows the laws of thermodynamics, suggesting the existence of underlying microstates. But if this is true, shouldn't the gravitational force then be emergent from these microstates in the same way as any thermodynamical effect is emergent from the behavior of its constituents (e.g. a gas)? If this can be proven, then indeed gravity is emergent. Problem: one has to prove that *any* configuration in GR may be interpreted thermodynamically, not just BHs. That's probably where holography comes into play. To me this smells pretty much like N=4 SYM vs. QCD: the former is not QCD, but it can be solved, so all the stringy people study just that one and claim to learn something about QCD. Here, we look at holographic models; GR is not holographic, but who cares... Engineering problems...
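    For anyone who wants the gist in formulas, here is a compressed sketch of the heuristic Newtonian part of the argument as I understand it; the entropy shift and the bit count on the screen are postulates of the approach, not derived results.

```latex
% Postulate: a test mass m displaced by \Delta x towards a holographic
% screen changes the screen entropy by
\Delta S = 2\pi k_B \,\frac{mc}{\hbar}\,\Delta x .
% A spherical screen of radius R around a mass M carries N bits,
N = \frac{A c^3}{G\hbar}, \qquad A = 4\pi R^2,
% over which the energy E = Mc^2 is equipartitioned:
E = \tfrac{1}{2}\,N k_B T .
% The entropic force F = T\,\Delta S/\Delta x then reproduces Newton:
F = \frac{2E}{N k_B}\cdot 2\pi k_B \frac{mc}{\hbar}
  = \frac{G M m}{R^2} .
```

    Gravity then appears as a thermodynamic tendency to maximize the entropy of the screen, which is exactly the "emergent from microstates" reading discussed above.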
  •  
    is there any experimental or observational evidence that points to this "solution"?
  •  
    Are you joking??? :D
  •  
    I was a bit fast to say it could be tested... apparently we don't even know a theory that is holographic, perhaps a string theory (see http://arxiv.org/abs/hep-th/9409089v2). So very far from any test...
  •  
    Luzi, I miss you!!!
  •  
    Leo, do you mean you liked my comment on your question more than Pacome's? Well, the ACT has to evolve and fledge, so no bullshitting anymore, but serious and calculating answers... :-) Sorry Pacome, nothing against you!! I just LOVE this Diigo because it gives me the opportunity for a happy revival of my ACT mood.
  •  
    haha, today would have been great to show your mood... we had a talk on the connection between mind and matter !!
LeopoldS

Self-organized adaptation of a simple neural circuit enables complex robot behaviour : Abstract : Nature Physics - 3 views

  •  
    is this really worth a Nature paper??
  •  
    Funny to read this question from you of all people, the eternal fan of anything linked to bio :-) I have read worse papers in Nature, and in addition it's just "Nature Physics", viz. "Nature garbage". Could be that they don't find enough really good stuff to publish in all their topical clones of Nature.
  •  
    Francesco already posted this below.
Tobias Seidl

Self-assembled artificial cilia - PNAS - 1 views

  •  
    Cilia are hairs driven by molecular motors. They are found in unicellular organisms, etc. If we can build such things artificially, we have micro-pumps, etc. Any space usability?
  •  
    Carlo's distributed actuator study originally considered cilia as well as peristaltic motion, if I remember right. I suppose you might still think about debris transport for digging applications. Originally there was an idea for thermal transport as well which, it turns out, was bollocks.
Luzi Bergamin

[0810.3179] The Enlightened Game of Life - 3 views

  •  
    Revised version of a 2008 paper. Pretty crazy title and perhaps crazy content...
  •  
    the abstract sounds like a randomly generated paper...
pacome delva

New Intel Sensor Could Cut Electricity Bill - 3 views

  • Once connected, the sensor will wirelessly connect to all electrical devices in the house and self-configure to record the voltages from each source in real time.
  •  
    "The first thing everyone did after seeing the energy graph on the family PC was to turn off the lights". Kind-of we are becoming slaves of the technology. Do we really need a sensor to tell us to turn-off the light when we are leaving the room!?