Advanced Concepts Team: Group items tagged 'small'


duncan barker

Can WISE find the hypothetical 'Tyche'? - 0 views

  •  
    "In November 2010, the scientific journal Icarus published a paper by astrophysicists John Matese and Daniel Whitmire, who proposed the existence of a binary companion to our sun, larger than Jupiter, in the long-hypothesized "Oort cloud" -- a faraway repository of small icy bodies at the edge of our solar system. The researchers use the name "Tyche" for the hypothetical planet. Their paper argues that evidence for the planet would have been recorded by the Wide-field Infrared Survey Explorer (WISE)."
Luís F. Simões

Shell energy scenarios to 2050 - 6 views

  •  
    just in case you were feeling happy and optimistic
  •  
    An energy scenario published by an oil company? Allow me to be sceptical...
  •  
    Indeed, Shell has been an energy company, not just an oil company, for some time now ... The two scenarios are, in their approach, dependent on the economic and political situation, which is right now impossible to forecast. The reference to Kyoto is surprising, almost outdated! But overall I find it rather optimistic in places, and the timeline (p37-39) is probably unlikely given recent events.
  •  
    the report was published in 2008, which explains the reference to Kyoto, as the follow-up to it was much more uncertain at that point. The Blueprint scenario is indeed optimistic, but also quite unlikely I'd say. I don't see humanity suddenly becoming so wise and coordinated. Sadly, I see something closer to the Scramble scenario as much more likely to occur.
  •  
    not an oil company??? Please have a look at the percentage of their revenues coming from oil and gas, compare it with all their other energy activities together, and you will see very quickly that it is only window dressing ... they are an oil and gas company ... and nothing more
  •  
    not JUST oil. From a description: "Shell is a global group of energy and petrochemical companies." Of course revenues coming from oil are the biggest; the investment in other energy sources is small for now. Knowing that most of their revenue comes from an exhaustible source, they invest elsewhere to secure their future. They have invested >$1b in renewable energy, including biofuels. They had the largest wind power business among so-called "oil" companies. Oil only defines what they do "best". As a comparison, some time ago Apple was selling only computers and now they sell phones. But I would not say Apple is just a phone company.
  •  
    window dressing only ... e.g.:
    - Net cash from operating activities (pre-tax) in 2008: $70 billion
    - Net income in 2008: $26 billion
    - Revenues in 2008: $88 billion
    Their investments and revenues in renewables don't even show up in their annual financial reports, probably because they fall under the heading of "marketing", which is already $1.7 billion ... This is what they report on their investments:
    Capital investment, portfolio actions and business development. Capital investment in 2009 was $24 billion. This represents a 26% decrease from 2008, which included over $8 billion in acquisitions, primarily relating to Duvernay Oil Corp. Capital investment included exploration expenditure of $4.5 billion (2008: $11.0 billion).
    In Abu Dhabi, Shell signed an agreement with Abu Dhabi National Oil Company to extend the GASCO joint venture for a further 20 years.
    In Australia, Shell and its partners took the final investment decision (FID) for the Gorgon LNG project (Shell share 25%). Gorgon will supply global gas markets to at least 2050, with a capacity of 15 million tonnes (100% basis) of LNG per year and a major carbon capture and storage scheme. Shell has announced a front-end engineering and design study for a floating LNG (FLNG) project, with the potential to deploy these facilities at the Prelude offshore gas discovery in Australia (Shell share 100%). Also in Australia, Shell confirmed that it has accepted Woodside Petroleum Ltd.'s entitlement offer of new shares at a total cost of $0.8 billion, maintaining its 34.27% share in the company; $0.4 billion was paid in 2009 with the remainder paid in 2010.
    In Bolivia and Brazil, Shell sold its share in a gas pipeline and in a thermoelectric power plant and its related assets for a total of around $100 million.
    In Canada, the Government of Alberta and the national government jointly announced their intent to contribute $0.8 billion of funding towards the Quest carbon capture and sequestration project. Quest, which is at the f
  •  
    thanks for the info :) They still have their 50% share in the wind farm in Noordzee (you can see it from ESTEC on a clear day). Look for Shell International Renewables, other subsidiaries and joint-ventures. I guess, the report is about the oil branch. http://sustainabilityreport.shell.com/2009/servicepages/downloads/files/all_shell_sr09.pdf http://www.noordzeewind.nl/
  •  
    no - it's about Shell globally - all of Shell ... these participations are just peanuts. Please read the CEO's intro in the pdf you linked to: he does not even mention renewables! Their entire sustainability strategy is about oil and gas - just making it (look) nicer and more environmentally friendly
  •  
    Fair enough; for me even peanuts are worth something, and I am not able to judge. Not all big-profit companies like Shell are evil :( Look in the pdf at what is in the upstream and downstream segments you mentioned above. Non-Shell sources, for examples and more objectivity: http://www.nuon.com/company/Innovative-projects/noordzeewind.jsp http://www.e-energymarket.com/news/single-news/article/ferrari-tops-bahrain-gp-using-shell-biofuel.html thanks.
pacome delva

Atomic clock is smallest on the market - 2 views

  •  
    Soon, caesium in your watch!
  •  
    very nice indeed ... how much more accurate are our Galileo clocks?
  •  
    This small clock is around 10^-12 @1d in stability (losing 1 second after 300,000 years) and 50 ns in accuracy. For comparison, Galileo and GPS clocks are around 10^-14 @1d in stability and 1 ns in accuracy. And ACES/PHARAO will be around 3*10^-16 @1d in stability and 0.3 ps accuracy.
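    A quick sanity check on those stability figures, as a minimal Python sketch. The linear error-growth model is an assumption here (real clock error growth depends on the noise type), so the outputs are only indicative:

        # Time for a clock of a given fractional frequency stability to
        # accumulate 1 second of error, assuming the error grows linearly.
        SECONDS_PER_YEAR = 365.25 * 24 * 3600

        def years_to_lose_one_second(sigma):
            return 1.0 / sigma / SECONDS_PER_YEAR

        for name, sigma in [("chip-scale clock", 1e-12),
                            ("Galileo/GPS clock", 1e-14),
                            ("ACES/PHARAO", 3e-16)]:
            print(f"{name}: ~{years_to_lose_one_second(sigma):,.0f} years")

    Under this simple linear model, 10^-12 corresponds to roughly 30,000 years per lost second, so the 300,000-years figure quoted above presumably assumes a different error-growth model.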
ESA ACT

Microscale Environments Could Be Probed By Super Small Nanoelectrodes - 0 views

  •  
    Investigating the composition and behavior of microscale environments, including those within living cells, could become easier and more precise with nanoelectrodes being developed at the University of Illinois.
Francesco Biscani

Intel Shows 48-Core x86 Processor - 1 views

  •  
    Finally a massively multi-core general-purpose architecture.
  •  
    Well, nice, but I wonder how much cache per core will be available... With 48 cores a single memory bus becomes nothing more than one big (small? :) ) bottleneck.
  •  
    Apparently they have a separate L2 cache per tile (i.e., every two processors) and a high-speed bus connecting the tiles. As usual, whether it will be fast enough will depend on the specific applications (which BTW is also true for other current multi-core architectures). The nice thing is of course that porting software to this architecture will be one order of magnitude less difficult than going to Tesla/Fermi/CELL architectures. This architecture will also be suitable for tasks other than floating-point computations (damn engineers polluting computer science :P), and it has the potential to be more future-proof than other solutions.
pacome delva

Transparent material opens a new window on solar energy - physicsworld.com - 4 views

  • Researchers in the US have developed a new kind of organic solar cell that converts a small but significant fraction of the sunlight that falls onto it into electricity, while still allowing most of the visible part of that light to pass through. Thanks to this transparency, the team says that the cell could be mounted onto windows in buildings or cars in order to tap a currently under-exploited source of energy.
dejanpetkow

Bioengineering to generate healthy skin - 1 views

  • That is, using a small biopsy from a specific patient, they can generate almost the entire cutaneous surface of that individual in the lab.
  • that it is possible to isolate epidermic stem cells from patients with different genetic skin diseases, cultivate them and, using molecular engineering as a first step, incorporate the therapeutic genes into each patient's genome to take the place of the one that the patient does not have or that functions abnormally. Afterwards, in the second step, the stem cells would be assembled into patches ready to be transplanted onto the patients.
  • "What we did in this case -- explains Marcela del Río -- was to transfer a normal SPINK-5 gene to a patient's stem cells and later use these cells to generate skin that could be transplanted to experimental models, such as mice."
  •  
    A nice approach to generating healthy skin, whether for patching parts of it or replacing the entire human skin. Next step: clinical studies.
dejanpetkow

Torsional Carbon Nanotube Artificial Muscles - 0 views

  • Actuator materials producing rotation are rare and demonstrated rotations are small, though rotary systems like electric motors, pumps, turbines and compressors are widely needed and utilized. Present motors can be rather complex and, therefore, difficult to miniaturize. We show that a short electrolyte-filled twist spun carbon nanotube yarn, which is much thinner than a human hair, functions as a torsional artificial muscle in a simple three-electrode electrochemical system, providing a reversible 15,000° rotation and 590 revolutions/minute. A hydrostatic actuation mechanism, like for nature’s muscular hydrostats, explains the simultaneous occurrence of lengthwise contraction and torsional rotation during the yarn volume increase caused by electrochemical double-layer charge injection. Use of a torsional yarn muscle as a mixer for a fluidic chip is demonstrated.
  •  
    I have no access to the pdf, but the abstract sounds interesting.
Thijs Versloot

Google plans for internet from space with 180 LEO satellites - 2 views

  •  
    The Wall Street Journal hears that the search firm is preparing to build 180 "small, high capacity" satellites that will go into low orbit and provide internet connections to underserved areas
  •  
    sorry, did not see your earlier post ... a fully altruistic move, of course, as they claim
  •  
    actually I posted about it first :P
H H

Lockheed Claims Breakthrough on Fusion Energy - 3 views

  •  
    Lockheed Martin Corp said on Wednesday it had made a technological breakthrough in developing a power source based on nuclear fusion, and the first reactors, small enough to fit on the back of a truck, could be ready in a decade.
  •  
    It is definitely positive news, but when it comes to fusion energy, being skeptical is the wisest course of action. "On the back of a truck" basically means that the energy density of the device is such that the surface energy load of a (steady-state) plasma is beyond current materials limits. It is probably a pulsed (ignitor-type) device, and those are prone to instabilities. That could be what their breakthrough claim refers to, though they say hardly anything about it. Let's see.
  •  
    Since there have already been a lot of false alarms in this field, one should definitely be careful about it. However, there is not a single detail about its structure or how it works in this news item... Here http://www.fusenet.eu/node/400 you can find a bit more information about it.
Thijs Versloot

NASA's 'Swarmies' are a squad of smaller, less intelligent rovers - 0 views

  •  
    Typically, we send rovers to our planetary neighbors one at a time -- but what if we sent a small team of smaller, less impressive robots instead? That's the idea NASA is exploring at Kennedy Space Center with Swarmies: a quartet of autonomous robots designed to work together to complete a single mission.
Thijs Versloot

The challenges of Big Data analysis @NSR_Family - 2 views

  •  
    Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This paper gives an overview of the salient features of Big Data and how these features impact paradigm change in statistical and computational methods as well as computing architectures.
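    The "spurious correlation" challenge is easy to reproduce numerically: with few samples and many candidate features, some pure-noise feature will always look strongly correlated with the target. A minimal numpy sketch (the sample and feature counts are arbitrary choices for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 50, 10_000                   # few samples, many features
        y = rng.standard_normal(n)          # target: pure noise
        X = rng.standard_normal((n, p))     # features: pure noise too

        # Pearson correlation of each feature with the target
        corr = (X - X.mean(0)).T @ (y - y.mean()) / (n * X.std(0) * y.std())
        print(f"max |corr| among {p} noise features: {np.abs(corr).max():.2f}")

    With 50 samples and 10,000 noise features, the best |correlation| typically comes out around 0.6, despite there being no signal at all.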
Thijs Versloot

Reality - Almost No Patented Discoveries Ever Get Used @WIRED - 3 views

  •  
    The unspoken reality is that the U.S. patent system creates a market so constricted by high transaction costs and legal risks that it excludes the vast majority of small and mid-sized businesses and prevents literally 95 percent of all patented discoveries from ever being put to use to create new products and services, new jobs, and new economic growth.
jcunha

Wireless 10 kW power transmission - 1 views

  •  
    Mitsubishi Heavy Industries said Friday that it has succeeded in transmitting 10 kW of power over 500 m. An announcement that comes just after JAXA scientists reported one more breakthrough in the quest for Space Solar Power Systems (http://phys.org/news/2015-03-japan-space-scientists-wireless-energy.html). One step closer to power generation from space!
  •  
    From the press release (https://www.mhi-global.com/news/story/1503121879.html): "10 kilowatts (kW) of power was sent from a transmitting unit by microwave. The reception of power was confirmed at a receiver unit located at a distance of 500 meters (m) away by the illumination of LED lights, using part of power transmitted". So 10 kW of transmission to light a few efficient LED lights??? In a 2011 report (https://www.mhi-global.com/company/technology/review/pdf/e484/e484017.pdf), MHI estimated this would generate the same electricity output as a 400-megawatt thermal plant - or enough to serve more than 150,000 homes during peak hours. The price? The same as publicly supplied power, according to its calculations. There are no results to back these claims, however. The main work they do now is focused on beam steering control. I guess the real application in mind is targeted more at terrestrial uses, e.g. wireless highway charging (http://www.bbc.com/future/story/20120312-wireless-highway-to-charge-cars). With the distances so much shorter, leading to much smaller antennas and rectennas, this makes much more sense to me to develop.
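    To put rough numbers on that antenna-size argument: the diffraction-limited beam spot grows linearly with distance, so long links need enormous apertures while road-scale links do not. A back-of-the-envelope Python sketch; the 5.8 GHz frequency and 2 m transmitter are assumed values for illustration, not MHI's actual figures:

        # Diffraction-limited (Airy) spot at the receiver:
        # spot diameter ~ 2.44 * wavelength * distance / aperture
        def spot_diameter(wavelength, aperture, distance):
            return 2.44 * wavelength * distance / aperture

        wl = 3e8 / 5.8e9   # ~5.2 cm wavelength at 5.8 GHz (assumed)
        tx = 2.0           # 2 m transmitting aperture (assumed)

        for d in (500, 36_000_000):   # field-test range vs geostationary orbit
            print(f"{d:>12,} m -> spot ~ {spot_diameter(wl, tx, d):,.0f} m wide")

    The same small transmitter that spreads its beam over roughly 30 m at 500 m range would spread it over thousands of kilometres from GEO, which is why the space-based proposals need such large antennas and rectennas.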
Thijs Versloot

Watch uranium radiation inside a cloud chamber - 6 views

  •  
    Ever wondered what radiation looks like? If you have, I bet you didn't think it would look as cool as this. This is a small piece of uranium mineral sitting in a cloud chamber, which means you can see the process of decay and radiation emission....
  •  
    Once I saw a DIY spark chamber at LIP (a CERN-associated laboratory). It was the work of a bunch of BSc students; they made it all from scratch, so it seems it is not that difficult to have one at home. Yet another project for a future 'Experimental Physics' stagiaire maybe :)
jaihobah

The Nanodevice Aiming to Replace the Field Effect Transistor - 2 views

  •  
    very nice! "For a start, the wires operate well as switches that by some measures compare well to field effect transistors. For example they allow a million times more current to flow when they are on compared with off when operating at a voltage of about 1.5 V. "[A light effect transistor] can replicate the basic switching function of the modern field effect transistor with competitive (and potentially improved) characteristics," say Marmon and co. But the wires also have entirely new capabilities. The device works as an optical amplifier and can also perform basic logic operations by using two or more laser beams rather than one. That's something a single field effect transistor cannot do."
  •  
    The good thing about using CdSe NWs (used here) is that they show a photon-to-current efficiency window around visible wavelengths, so in principle any visible light can be used to switch the transistor on/off. I don't agree with the motto "Nanowires are also simpler than field effect transistors and so they're potentially cheaper and easier to make." Yes, they are simple, yet for applications, fabricating devices with them consistently is very challenging (and the research effort is not cheap at all...) and calls for improvements and breakthroughs in the fabrication process.
  •  
    any idea how they shine the light selectively onto such small surfaces?
  •  
    "Illumination sources consisted of halogen light, 532.016, 441.6, and 325 nm lasers ported through a Horiba LabRAM HR800 confocal Raman system with an internal 632.8 nm laser. Due to limited probe spacing for electrical measurements, all illumination sources were focused through a 50x long working distance (LWD) objective lens (N.A. = 0.50), except 325 nm, which went through a 10x MPLAN objective lens (N.A. = 0.25)." Laser spot size calculated from optical diffraction formula 1.22*lambda/NA
Marcus Maertens

[1703.00045] Aggregated knowledge from a small number of debates outperforms the wisdom... - 1 views

  •  
    Wisdom of crowds under a new perspective: a motivation for the island model?
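    For context, in an island model several small populations evolve independently and occasionally exchange their best individuals, which is loosely analogous to aggregating a small number of independent debates. A toy Python sketch (the objective and all parameters are made-up illustrations, not the paper's setup):

        import random

        def fitness(x):                      # toy objective: maximize -x^2
            return -x * x

        def evolve(pop, gens=20):
            for _ in range(gens):
                pop = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
                pop += [x + random.gauss(0, 0.1) for x in pop]  # mutated copies
            return pop

        # Four islands evolving independently, with periodic migration
        islands = [[random.uniform(-10, 10) for _ in range(20)] for _ in range(4)]
        for epoch in range(5):
            islands = [evolve(pop) for pop in islands]
            best = [max(pop, key=fitness) for pop in islands]
            for pop in islands:              # migration: share champions
                pop.extend(best)
        print("best solution:", max(max(pop, key=fitness) for pop in islands))

    Between migrations each island keeps its own diversity, which is roughly what the "small number of debates" result suggests is valuable.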
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also ... now that the Commission has funded the project, Blue Brain is a rather big hit. Btw, Nicolelis is a rather credited neuroscientist.
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to) AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains is not my thing).. I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather about how to program them well enough ...... and that's not bringing us anywhere near the singularity
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed was the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most: I didn't get the retail store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain; it will be rather useless, since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to, stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine, as long as some brain gets started :) that is the goal :)
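    For what it's worth, the sensitivity under discussion is easy to demonstrate: two Lorenz trajectories started 10^-9 apart end up in completely different states. A minimal Python sketch with plain Euler integration and the standard parameters:

        def lorenz_step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = s
            return (x + dt * sigma * (y - x),
                    y + dt * (x * (rho - z) - y),
                    z + dt * (x * y - beta * z))

        a = (1.0, 1.0, 1.0)
        b = (1.0, 1.0, 1.0 + 1e-9)   # differs by one part in 10^9
        for step in range(1, 30001):
            a, b = lorenz_step(a), lorenz_step(b)
            if step % 10000 == 0:
                dist = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
                print(f"t = {step * 0.001:4.0f}  separation = {dist:.3e}")

    By t = 30 the separation has grown from 10^-9 to the size of the attractor itself, which is the sense in which predicting one particular trajectory (or brain) is hopeless.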
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which is not a biological brain) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So one has to determine these boundary conditions as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but this will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a tiny functional unit by adding neurons and complexity slowly, in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  •  
    Johannes: "I never quite see from these 'artificial brain' proposals to what extent they are trying to simulate the whole system and not just the brain." In Artificial Life, the whole environment plus bodies & brains is simulated. You also have the whole embodied cognition movement that basically advocates just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains than human ones. (My take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platform onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models in more realistic settings, in which the models' outputs get to affect many other systems, and through them feed back into their future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system actually still works? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
Nicholas Lan

The Future… One Hundred Years Ago - 13 views

  •  
    one of these again. French illustrations from 1910 of life in the year 2000. some pleasingly close. a lot of flying and robots. some inexplicable (a bunch of people staring at a horse). some BMI.
  •  
    I like them again and again ....
  •  
    what would be today's equivalents?
  •  
    Ha! The one about the horse is that "in 100 years there will be people who've never seen a live horse in their lives" :-) Actually it's more than true now, with children asking my mother, who works at a school: "so, do those kangaroos really exist?" Children are fed so much realistic BS on TV (dinosaur parks etc.) that they can hardly tell the difference between fiction and reality. If you already have offspring: have they seen, say, a live cow or chicken yet? (This is most probably a reference to the quote: "A horse is as everyone can see")
  •  
    >what would be todays equivalents? Hmmm... what about technology forecasts?
  •  
    ah. that makes sense. what about the one where they're having dinner then?
  •  
    No idea... another one I don't get is the one with the waiter presenting some small black-and-white thing to the white-haired guy in the chair.
  •  
    love the clockwork orange one
Alexander Wittig

The Whorfian Time Warp: Representing Duration Through the Language Hourglass. - 0 views

  •  
    How do humans construct their mental representations of the passage of time? The universalist account claims that abstract concepts like time are universal across humans. In contrast, the linguistic relativity hypothesis holds that speakers of different languages represent duration differently. The precise impact of language on duration representation is, however, unknown. Here, we show that language can have a powerful role in transforming humans' psychophysical experience of time. Contrary to the universalist account, we found language-specific interference in a duration reproduction task, where stimulus duration conflicted with its physical growth. When reproducing duration, Swedish speakers were misled by stimulus length, and Spanish speakers were misled by stimulus size/quantity. These patterns conform to preferred expressions of duration magnitude in these languages (Swedish: long/short time; Spanish: much/small time). Critically, Spanish-Swedish bilinguals performing the task in both languages showed different interference depending on language context. Such shifting behavior within the same individual reveals hitherto undocumented levels of flexibility in time representation. Finally, contrary to the linguistic relativity hypothesis, language interference was confined to difficult discriminations (i.e., when stimuli varied only subtly in duration and growth), and was eliminated when linguistic cues were removed from the task. These results reveal the malleable nature of human time representation as part of a highly adaptive information processing system.