Home/ Advanced Concepts Team/ Group items tagged alternative


Thijs Versloot

Alternative sleep cycles - 1 views

  •  
    Give the Ubermancycle a try?
  • ...4 more comments...
  •  
    I was into this some time ago and found a documentary in which they performed an experiment on a guy. Long story short, it didn't work that well. He was semi-lucid all the time and his mental performance dropped. Perhaps it is possible to survive like this for months, but if your goal is to maximize your daily output, you will not gain extra work hours due to being 3/4 conscious most of the time. EDIT: Not related to the documentary I mentioned, but some first-hand stories: http://www.reddit.com/r/IAmA/comments/co5t9/i_attempted_polyphasic_sleep_for_a_documentary_ama/c0tza1e
  •  
    I also heard about it. At the moment, I am on some sort of bi-phasic sleep and I am not feeling more tired than with the monophasic one (while sleeping effectively less right now).
  •  
    If it exists, there's an xkcd about it: http://xkcd.com/320/ Actually the schedule proposed there is quite useful if you're into this whole Friday / Saturday night thing..
  •  
    I don't see why it wouldn't work if you manage to detach yourself from the circadian input. As in never ever see sun and daylight :))
  •  
    > As in never ever see sun and daylight :)) Like in the Netherlands you mean?
  •  
    Tri-phasic sleep rhythm works fine.
Marcus Maertens

Ultrahigh Acceleration Neutral Particle Beam-Driven Sails - 1 views

  •  
    An alternative to photon-beam driven sails?
johannessimon81

18-year-old massively improves supercapacitors during Intel International Science and E... - 1 views

  •  
    "Her goal was to design and synthesise a super capacitor with increased energy density while maintaining power density and long cycle life. She designed, synthesised and characterised a novel core-shell nanorod electrode with hydrogenated TiO2 (H-TiO2) core and polyaniline shell. H-TiO2 acts as the double layer electrostatic core. Good conductivity of H-TiO2 combined with the high pseudo capacitance of polyaniline results in significantly higher overall capacitance and energy density while retaining good power density and cycle life. This new electrode was fabricated into a flexible solid-state device to light an LED to test it in a practical application. Khare then evaluated the structural and electrochemical properties of the new electrode. It demonstrated high capacitance of 203.3 mF/cm2 (238.5 F/g) compared to the next best alternative super capacitor in previous research of 80 F/g, due to the design of the core-shell structure. This resulted in excellent energy density of 20.1 Wh/kg, comparable to batteries, while maintaining a high power density of 20540 W/kg. It also demonstrated a much higher cycle life compared to batteries, with a low 32.5% capacitance loss over 10,000 cycles at a high scan rate of 200 mV/s."
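    A quick cross-check of the quoted figures (a sketch, not from the article): the reported 238.5 F/g gravimetric capacitance is consistent with the reported ~20.1 Wh/kg energy density via E = ½CV², if one assumes an operating window of roughly 0.8 V. The voltage window is an assumption here - the article does not state it - though it is a typical value for aqueous polyaniline devices.

```python
# Cross-check: energy density from gravimetric capacitance, E = 1/2 * C * V^2.
# ASSUMPTION: the ~0.78 V operating window is not stated in the article;
# it is a typical value for aqueous polyaniline supercapacitors.

C_grav = 238.5      # F/g, capacitance reported in the article
V_window = 0.78     # V, assumed voltage window (not from the article)

E_joule_per_gram = 0.5 * C_grav * V_window ** 2   # J/g
E_wh_per_kg = E_joule_per_gram * 1000 / 3600      # 1 kg = 1000 g, 1 Wh = 3600 J

print(f"Energy density: {E_wh_per_kg:.1f} Wh/kg")  # close to the reported 20.1 Wh/kg
```

    With these assumptions the numbers line up, which suggests the reported capacitance and energy density were measured consistently rather than being independent claims.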
santecarloni

The Higgs, Boltzmann Brains, and Monkeys Typing Hamlet | The Crux | Discover Magazine - 7 views

  •  
    good luck with this....
  • ...3 more comments...
  •  
    Nice article, actually! It summarizes in "human readable format" why and how too many cosmologists and string theorists just went bozo...
  •  
    really! this article should go for the Ig Nobels! http://arxiv.org/abs/0808.3778 I wonder which substance theorists are taking... I will avoid it...! but really this is very worrying: "complex structures will occasionally emerge from the vacuum as quantum fluctuations, at a small but nonzero rate per unit spacetime volume. An intelligent observer, like a human, could be one such structure." Is this a new alternative to Darwinism...??? a support to creationism?? How can a physicist write such nonsense?
  •  
    and this is published in PRD !!!
  •  
    In 1996 Sokal hoaxed sociologists with his famous nonsense text on the political implications of quantum gravity. Can one play a similar game with "researchers" on Boltzmann brains, multiverses, string landscapes or similar? I doubt it; this is just reality satire that can't be topped.
  •  
    Poor Boltzmann ...
Tobias Seidl

Frontiers | Alternatives to Peer Review: Novel Approaches for Research Evaluation | Fro... - 6 views

  •  
    Some new field of game for the ACT?
  •  
    Very interesting paper!
johannessimon81

Iridium to introduce WiFi hotspots for global satellite internet - 2 views

  •  
    In remote / under-developed regions this might actually be a strong alternative for building internet connectivity - somewhat like the exploding market for cell phones in Africa due to the lack of land lines.
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. Also the article "Behavioural Categorisation: Behaviour makes up for bad vision" is interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it is valid for extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time as complexity of the body (sensors, processors, actuators) increases the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who is just dead and trying to restart the brain by means of electric shocks.
  •  
    Actually on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However that doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every step deviation from its desired trajectory using the complicated system of nerve endings and motor commands. Reflexes which were built over the years, as those massive amounts of data slowly get integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed but now we arrived at a tautology. The whole problem is ultimately nobody knows what gets processed (not to mention how). In fact an absolute statement that "all information" gets processed is very easy to dismiss because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally it's as good as all other alternative assumptions you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture can you see, how abstractly can you form concepts and ideas), another is the breadth of the intelligence (how well can you actually generalize, how encompassing those concepts are and what level of detail you perceive in all the information you have) and another is the relevance of the information (this is where embodiment comes in: what you do is to a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
johannessimon81

High efficiency solid state heat engine - 0 views

  •  
    We discussed this today during coffee. The inventor claims that a pressure differential can push hydrogen through a proton-conductive membrane (thereby stripping off the electrons, which flow through an electric circuit and provide electric power). The type of membrane is fairly similar to that found in a hydrogen fuel cell. If the pressure differential is caused by selective heating, this is in essence a heat engine that directly produces electricity. The inventor claims that this could be a high efficiency alternative to thermoelectric devices and could even outperform PV and Stirling engines with an efficiency close to that of fuel cells (e.g., ~60% @ dT=600K). I could not find any scientific publications as the inventor is not affiliated with any University - he does however hold an impressive number of patents from a very wide field (e.g., the "Super Soaker" squirt gun) and has worked on several NASA and US military projects. His current research seems to be funded by the latter as well. Here are some more links that I found: http://www.theatlantic.com/magazine/archive/2010/11/shooting-for-the-sun/308268/ http://www.johnsonems.com/?q=node/13 http://scholar.google.nl/scholar?q=%22lonnie+g+johnson%22+&btnG=&hl=nl&as_sdt=0%2C5
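    For context on the ~60% claim, a quick sanity check against the Carnot limit (a sketch; the 300 K cold-side temperature is an assumption, not stated in the source, which gives only the differential):

```python
# Carnot limit for a heat engine with dT = 600 K.
# ASSUMPTION: cold side at 300 K (room temperature); the claim gives only dT.

T_cold = 300.0                      # K, assumed ambient cold side
dT = 600.0                          # K, temperature differential from the claim
T_hot = T_cold + dT                 # K

eta_carnot = 1.0 - T_cold / T_hot   # thermodynamic upper bound on efficiency

print(f"Carnot limit at dT=600 K: {eta_carnot:.1%}")                   # 66.7%
print(f"Claimed ~60% as fraction of Carnot: {0.60 / eta_carnot:.0%}")  # 90%
```

    Reaching ~90% of the Carnot limit would far exceed what thermoelectric devices achieve (typically well below half of Carnot), which is why the absence of peer-reviewed publications is worth noting.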
tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  • ...14 more comments...
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only a few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making team work nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing just fine to scale as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open source projects are managed collaboratively by thousands of people, and many of them are in the multi-million lines of code range. My personal opinion of why in the scientific world these open models are having so many difficulties is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of changes. There is also the fact that the barrier of entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an Operating System or writing a new balancing tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things mixed up together - open science journals vs traditional journals, conservatism of science community wrt programmers (to me one of the reasons for this might be the average age of both groups, which is probably more than 10 years apart ...) and then using emailing wrt other collaboration tools .... .... will have to look at the paper now more carefully ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize"; what does it mean indeed. In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination means. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively because of wikipedia and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikipedia, forums ... because that would be lame and your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question on the ACT team work is one that comes back continuously, and so far it has always boiled down to the question of how much there needs to and should be a team project to which everybody in the team contributes in his/her way, versus how much we should let smaller, flexible teams within the team form and progress, following a bottom-up initiative rather than imposing one from the top down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. - but hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it .... but in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it for their own personal attitude or be it for pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team effort where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris' words).
  •  
    I am, and was, involved in "collaboration" but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and others waiting. Sometimes even, some people don't do what they are supposed to do, so nothing happens ... this could not be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for a 15 member team work, and it is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings, hate, anger, envy, sympathy, love, etc about the others. Unfortunately(?), this could lead to situations, where, in theory, a group of brains could work together, but not the same group of people. As far as I am concerned, this happened many times during my ACT period. And this is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I do efficient collaborations with those people who are "close" to me not only in scientific interest, but also in some private sense. And I have people around me who have interesting topics and they might need my help and knowledge, but somehow, it just does not work. Simply lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, why it worked: only brains and not the people worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it is the original article's mention of "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people), it is something that has been done countless times before. Do you think that now that it is mathematicians who are doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...), there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierachy, meritocracy, review)". First of all I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given this kind of variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset a participant of a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that
  •  
    hey, just caught up on the discussion. For me what is very new is mainly the framework where this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for the Boinc-type projects, which are very different, because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as I read in the wiki (if an article is written, who writes it, whose names go on the paper). They chose to refer to the project, and not to the individual researchers, as a temporary solution... It is not so surprising to me that this type of work was first done in the domain of mathematics. Perhaps I have an ideal view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so this model of research does not fit, in my opinion, with the way research is governed. Anyway we had a discussion on the Ariadnet on how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much the problem with giving credit. It is not the first time a group of researchers collectively take credit for a result under a group umbrella, e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible there's no way to fake contributions or to give undue credit, and one could cite without problems a group paper in his/her CV, research grant application, etc.
  •  
    Well my point was more that it could be a problem with how the actual system works. Let's say you want a grant or a position; then the jury will count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of these journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that i should visit you guys at estec... :-)
  •  
    urgently!! btw: we will have the ACT christmas dinner on the 9th in the evening ... are you coming?
Joris _

Obama's dream of Mars at risk from radiation - physicsworld.com - 0 views

  • Schwabe cycle, where sunspot numbers reach a peak roughly once every 11 years
  • the intensity of each solar maximum is also thought to oscillate over a period, called the Gleissberg cycle
  • ...3 more annotations...
  • The worst-case scenario is that if you radiate a crew sufficiently, they'd all succumb to radiation sickness within a few days and essentially vomit and diarrhoea themselves to death within an enclosed capsule
  • “The Moon missions were just blind lucky,” explains Lewis Dartnell, an astrobiologist at University College, London. “The astronauts would have experienced radiation sickness and a higher risk of future cancer if they'd been hit,” he adds.
  • Hapgood and colleagues are currently working on an alternative technique that involves surrounding the spacecraft with a plasma shield to deflect incoming protons without creating secondary radiation
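    The two cycles highlighted above can be illustrated with a toy model: an 11-year Schwabe oscillation whose strength is modulated by a slow Gleissberg envelope, so that some solar maxima come out weaker than others. This is an illustration only - the ~87-year Gleissberg period is one commonly quoted value, and all amplitudes below are assumptions, not fitted to observed sunspot records.

```python
import math

# Toy sunspot-number model: 11-year Schwabe cycle with its amplitude
# modulated by a slower Gleissberg envelope.
# ASSUMPTIONS: the ~87-year Gleissberg period and all amplitudes are
# illustrative, not fitted to observed sunspot data.

def sunspot_number(t_years: float,
                   base: float = 80.0,       # assumed mean peak amplitude
                   schwabe: float = 11.0,    # years, Schwabe period
                   gleissberg: float = 87.0  # years, assumed Gleissberg period
                   ) -> float:
    envelope = base * (1.0 + 0.4 * math.sin(2 * math.pi * t_years / gleissberg))
    cycle = 0.5 * (1.0 + math.sin(2 * math.pi * t_years / schwabe))  # 0..1
    return envelope * cycle

# Maxima recur roughly every 11 years; their height varies with the envelope.
for t in (2.75, 13.75, 24.75):  # successive Schwabe maxima in this model
    print(f"t = {t:5.2f} yr  sunspot number ~ {sunspot_number(t):6.1f}")
```

    In such a picture, a mission launched near a weak maximum (a Gleissberg minimum) faces a different radiation environment than one launched near a strong maximum, which is the scheduling concern the article raises.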
Dario Izzo

Electronic Nose - 3 views

  •  
    NASA worked on this 4 years ago.... good inspiration for a possible alternative application to navigation
Joris _

DoD Buzz | Back Away from GPS: AF Chief - 4 views

  • develop alternatives to GPS
  • The fact that the U.S., which invented GPS and most of what depends on it (ATMs, gas pumps, trucking companies and lost spouses), would consider stepping away from the system marks a cultural and technological milestone
  • recommended that the U.S. scrap building five more GPS satellites and engage European allies on sharing their proposed Galileo global navigation satellite system
nikolas smyrlakis

mentored by the Advanced Concepts Team for Google Summer of Code 2010 - 4 views

  •  
    you probably already know; I post it for the Twitter account and for your comments
  • ...4 more comments...
  •  
    once again one of these initiatives that came up from a situation and that would never have been possible with a top-down approach .... fantastic! and as Dario said: we are apparently where NASA still has to go with this :-)
  •  
    Actually, NASA Ames did that already within the NASA Open Source Agreement in 2008 for a V&V software!
  •  
    indeed ... you are right .... interesting project btw - they started in 1999, were in 2005 the first NASA project on Sourceforge and won several awards .... then this entry on why they did not participate last year: "05/01/09: Skipping this year's Google Summer-of-Code - many of you have asked why we are not participating in this year's Summer of Code. The answer is that both John and Peter were too busy with other assignments to set this up in time. We will be back in 2010. At least we were able to compensate with a limited number of NASA internships to continue some of last year's projects." .... but I could not find them in this year's selected list - any clue?
  •  
    but in any case, according to the apple guru, Java is a dying technology, so their project might as well ...
  •  
    They participate under the name "The Java Pathfinder Team" (http://babelfish.arc.nasa.gov/trac/jpf/wiki/events/soc2010). It is actually a very useful project for both education and industry (Airbus created a consortium on model checking software, and there is a lot of research on it). As far as I know, TAS had some plans of using Java on board spacecraft 2 years ago. Not sure the industry is really sensitive to Jobs' opinions ;) particularly if there is no better alternative!
Luís F. Simões

The Fallout of a Helium-3 Crisis : Discovery News - 3 views

  • So short in fact, that last year when the looming crisis, which reporters had been covering for years, became official, the price of helium-3 went from $150 per liter to $5,000 per liter.
  • The science, medical and security uses for helium-3 are so diverse that the crisis banded together a hodge-podge of universities, hospitals and government departments to try and find workable alternatives and engineer ways to recycle the gas they do have.
  •  
    So, which shall it be? Are we going to increase the production of hydrogen bombs, or can we finally go back to the Moon (http://en.wikipedia.org/wiki/Helium-3#Extraterrestrial_supplies) ?
  •  
    None of these. Either you recycle, or you take it from natural sources on Earth. Although most people don't know it - there is plenty of natural He3 on Earth. It's just nonsense to use it for energy production (in fusion reactors) since the energy balance for getting the He3 from these sources on Earth is just negative. Or you try to substitute He3.
Luís F. Simões

Shell energy scenarios to 2050 - 6 views

  •  
    just in case you were feeling happy and optimistic
  • ...7 more comments...
  •  
    An energy scenario published by an oil company? Allow me to be sceptical...
  •  
    Indeed, Shell is an energy company, not just oil, for some time now ... The two scenarios are, in their approach, dependent on the economic and political situation, which is right now impossible to forecast. The reference to Kyoto is surprising, almost outdated! But overall, I find it rather optimistic at some stages, and probably the timeline (p37-39) is unlikely with recent events.
  •  
    the report was published in 2008, which explains the reference to Kyoto, as the follow-up to it was much more uncertain at that point. The Blueprint scenario is indeed optimistic, but also quite unlikely I'd say. I don't see humanity suddenly becoming so wise and coordinated. Sadly, I see something closer to the Scramble scenario as much more likely to occur.
  •  
    not an oil company??? please have a look at the percentage of their revenues coming from oil and gas and then compare this with all their other energy activities together and you will see very quickly that it is only window dressing ... they are an oil and gas company ... and nothing more
  •  
    not JUST oil. From a description: "Shell is a global group of energy and petrochemical companies." Of course the revenues coming from oil are the biggest; the investment in other energy sources is small for now. Knowing that most of their revenue comes from a finite source, they invest elsewhere to guarantee their future. They have invested >1b$ in renewable energy, including biofuels. They had the largest wind power business among the so-called "oil" companies. Oil only defines what they do "best". As a comparison, some time ago Apple were selling only computers and now they sell phones, but I would not say Apple is just a phone company.
  •  
    window dressing only ... e.g. net cash from operating activities (pre-tax) in 2008: 70 Billion $; net income in 2008: 26 Billion; revenues in 2008: 88 Billion. Their investments and revenues in renewables don't even show up in their annual financial reports, since probably they are under the heading of "marketing", which is already 1.7 Billion $ ... this is what they report on their investments: "Capital investment, portfolio actions and business development. Capital investment in 2009 was $24 billion. This represents a 26% decrease from 2008, which included over $8 billion in acquisitions, primarily relating to Duvernay Oil Corp. Capital investment included exploration expenditure of $4.5 billion (2008: $11.0 billion). In Abu Dhabi, Shell signed an agreement with Abu Dhabi National Oil Company to extend the GASCO joint venture for a further 20 years. In Australia, Shell and its partners took the final investment decision (FID) for the Gorgon LNG project (Shell share 25%). Gorgon will supply global gas markets to at least 2050, with a capacity of 15 million tonnes (100% basis) of LNG per year and a major carbon capture and storage scheme. Shell has announced a front-end engineering and design study for a floating LNG (FLNG) project, with the potential to deploy these facilities at the Prelude offshore gas discovery in Australia (Shell share 100%). In Australia, Shell confirmed that it has accepted Woodside Petroleum Ltd.'s entitlement offer of new shares at a total cost of $0.8 billion, maintaining its 34.27% share in the company; $0.4 billion was paid in 2009 with the remainder paid in 2010. In Bolivia and Brazil, Shell sold its share in a gas pipeline and in a thermoelectric power plant and its related assets for a total of around $100 million. In Canada, the Government of Alberta and the national government jointly announced their intent to contribute $0.8 billion of funding towards the Quest carbon capture and sequestration project. Quest, which is at the f
  •  
    thanks for the info :) They still have their 50% share in the wind farm in Noordzee (you can see it from ESTEC on a clear day). Look for Shell International Renewables, other subsidiaries and joint-ventures. I guess, the report is about the oil branch. http://sustainabilityreport.shell.com/2009/servicepages/downloads/files/all_shell_sr09.pdf http://www.noordzeewind.nl/
  •  
    no - it's about Shell globally - all of Shell ... these participations are just peanuts. Please read the intro by the CEO in the pdf you linked to: he does not even mention renewables! Their entire sustainability strategy is about oil and gas - just making it (look) nicer and environmentally friendlier
  •  
    Fair enough; for me even peanuts are worthy, and I am not able to judge. Not all big-profit companies, like Shell, are evil :( Look in the pdf at what is under the upstream and downstream headings you mentioned above. Non-Shell sources, for examples and more objectivity: http://www.nuon.com/company/Innovative-projects/noordzeewind.jsp http://www.e-energymarket.com/news/single-news/article/ferrari-tops-bahrain-gp-using-shell-biofuel.html thanks.
LeopoldS

BBC News - Speed-of-light experiments give baffling result at Cern - 5 views

  •  
    Sante, Luzi have a look at this???!!!
  • ...3 more comments...
  •  
    and here's the xkcd on it: http://xkcd.com/955/
  •  
    And here's the arXiv paper: http://arxiv.org/abs/1109.4897 Serious? Difficult to say. I'm a theorist and can't really rate their measurement techniques. Certainly be cautious; mostly such things disappear faster than they appeared.
  •  
    it took them 3 years to "appear"!
  •  
    Leo, you mean that they measured for 3 years? That's not a point to criticize: since the only interaction of neutrinos with matter is the Weak Interaction (which is indeed very, very weak), it is extremely hard to get reasonable statistics. For the same reason, it's essentially impossible to shield the experiment from the background. And this background (solar neutrinos, cosmic-ray neutrinos) is huge.
  •  
    for sure a result to be taken seriously. It makes a buzz in my lab... but always be cautious with this kind of declaration, which hugely violates all the physics we know and even most of the reasonable alternative theories... Remember the Pioneer anomaly, for which it took almost ten years to establish that it was in the end a thermal effect.
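OPERA's claim can be sanity-checked with back-of-the-envelope numbers. A minimal sketch, assuming the figures reported in the arXiv paper linked above (a ~730 km CERN-Gran Sasso baseline and a ~60.7 ns early arrival); the actual baseline survey and timing corrections in the paper are far more involved:

```python
C = 299_792_458.0        # speed of light in vacuum, m/s
baseline_m = 730_000.0   # approximate CERN-Gran Sasso distance, m
early_ns = 60.7          # reported early arrival of the neutrinos, ns

t_light_ns = baseline_m / C * 1e9    # light travel time over the baseline, ns
excess = early_ns / t_light_ns       # fractional speed excess (v - c) / c
print(f"light travel time: {t_light_ns:.0f} ns")
print(f"(v - c)/c ~ {excess:.2e}")   # ~2.5e-5, close to the paper's quoted value
```

Even this crude ratio shows why the result was so baffling: a few parts in 10^5 above c is enormous compared to the precision of other tests of special relativity.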
Luís F. Simões

Wind Power Without the Blades: Big Pics : Discovery News - 4 views

  • The carbon-fiber stalks, reinforced with resin, are about a foot wide at the base tapering to about 2 inches at the top. Each stalk will contain alternating layers of electrodes and ceramic discs made from piezoelectric material, which generates a current when put under pressure. In the case of the stalks, the discs will compress as they sway in the wind, creating a charge.
  • Based on rough estimates, said Núñez-Ameni the output would be comparable to that of a conventional wind farm covering the same area
  • After completion, a Windstalk should be able to produce as much electricity as a single wind turbine, with the advantage that output could be increased with a denser array of stalks. Density is not possible with conventional turbines, which need to be spaced about three times the rotor's diameter in order to avoid air turbulence. But Windstalks work on chaos and turbulence so they can be installed much closer together, said Núñez-Ameni.
  • ...1 more annotation...
  • Núñez-Ameni also reports that the firm is currently working on taking the Windstalk idea underwater. Called Wavestalk, the whole system would be inverted to harness energy from the flow of ocean currents and waves.
  •  
    additional information: http://atelierdna.com/?p=144
  •  
    isn't this a bit of a contradiction? On the one hand: "Based on rough estimates, said Núñez-Ameni the output would be comparable to that of a conventional wind farm covering the same area"; and on the other: "After completion, a Windstalk should be able to produce as much electricity as a single wind turbine, with the advantage that output could be increased with a denser array of stalks. Density is not possible with conventional turbines, which need to be spaced about three times the rotor's diameter in order to avoid air turbulence." Still, a very interesting concept!
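The spacing argument in the last comment is easy to quantify. A minimal sketch, assuming a hypothetical 80 m rotor diameter and a hypothetical 10 m stalk pitch (neither figure appears in the article; they only serve to illustrate the 3-diameter spacing rule it quotes):

```python
import math

area_m2 = 1_000_000.0               # one square kilometre of farm area
side_m = math.sqrt(area_m2)

# conventional turbines: spaced ~3 rotor diameters apart (per the article)
rotor_d = 80.0                      # assumed rotor diameter, m
n_turbines = math.floor(side_m / (3 * rotor_d)) ** 2

# Windstalks: turbulence is not a constraint, so a much denser grid is possible
stalk_pitch = 10.0                  # assumed stalk spacing, m
n_stalks = math.floor(side_m / stalk_pitch) ** 2

print(n_turbines, n_stalks)         # units fitting on the same square kilometre
```

This only compares unit counts on the same area; it says nothing about per-unit output, which is exactly where the two quoted claims seem to pull in different directions.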
nikolas smyrlakis

SPACE.com -- White House Panel Backs Commercial Alternatives to NASA's New Rocket - 1 views

  •  
    "Seeking a Human Spaceflight Program Worthy of a Great Nation" - NASA's and Lockheed Martin's second thoughts about returning to the Moon..
Joris _

The Space Review: Breaking up may be good to do - 6 views

shared by Joris _ on 03 Nov 09
LeopoldS liked it
  •  
    I especially like: "The program will also create a 'developer's kit' of open hardware and software specifications to make it easier for new components to integrate into such fractionated systems." Joris: wanna take the lead on having a closer look at this? I definitely would like to be part of it and am happy to contribute, possibly also Juxi - is a first assessment by Christmas realistic?
  •  
    I think it's a very interesting approach. If you google "darpa F6", you should see that a lot seems to be ongoing. So, should we do something about it before having the conclusions of the DARPA study?
  •  
    wait and see is never a good approach in these cases ... the first step has to be anyway to understand what they are up to, then to think about our own ideas, approaches and alternatives, and then to see what we can do specifically in the team on it.