
Advanced Concepts Team: group items matching "architectures" in title, tags, annotations or URL


Picture This: NVIDIA GPUs Sort Through Tens of Millions of Flickr Photos - 2 views

  •  
    Strange and exotic cityscapes. Desolate wilderness areas. Dogs that look like wookies. Flickr, one of the world's largest photo sharing services, sees it all. And, now, Flickr's image recognition technology can categorize more than 11 billion photos like these. And it does it automatically. It's called "Magic View." Magical deep learning! Buzzword attack!
  • ...4 more comments...
  •  
    and here comes my standard question: how can we use this for space? fast detection of natural disasters onboard?
  •  
    Even on ground. You could for example teach it what nuclear reactors or missiles or other weapons you don't want look like on satellite pictures and automatically scan the world for them (basically replacing intelligence analysts).
  •  
    In fact, I think this could make a nice ACT project: counting seals from satellite imagery is an actual (and quite recent) thing: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092613 In this publication they did it manually from a GeoEye 1 b/w image, which sounds quite tedious. Maybe one can train one of those image recognition algorithms to do it automatically. Or maybe it's a bit easier to count larger things, like elephants (also a thing).
  •  
At the HiPEAC (High Performance and Embedded Architecture and Compilation) conference I attended at the beginning of this year, a big trend was CUDA GPUs vs FPGAs for hardware-accelerated image processing. Most of the discussion orbited around who was faster and cheaper, with people from NVIDIA on one side and people from Xilinx and Intel on the other. I remember talking with an IBM scientist working on hardware-accelerated data processing, together with the radio telescope institute in the Netherlands, about the (GPU/CUDA) solution they were working on. I gathered that NVIDIA GPUs suit best the applications that do not depend on specific hardware, with the advantage of being programmable in an 'easy' way accessible to a scientist. FPGAs are highly reliable components with the advantage of being available in rad-hard versions, but they require specific knowledge of physical circuit design and tailored, 'harsh' programming languages. I don't know what level of rad hardness NVIDIA's GPUs offer... FPGAs are therefore the standard choice for image processing in space missions (a talk with the microelectronics department guys could expand on this), whereas GPUs are currently used in some ground-based systems (radio astronomy and other types of telescopes). For a specific purpose like the one you mentioned, this FPGA vs GPU trade-off should be assessed before going further.
  •  
You're forgetting power usage. GPUs need a thousand hamster wheels' worth of power while an FPGA can run on a potato. Since space applications are highly power-limited, putting any kind of GPU monster in orbit or on a rover is a failed idea from the start. Also, with FPGAs, if a gate burns out from radiation you can just reprogram around it. Looking for seals offline in high-res images is indeed definitely a GPU task... for now.
  •  
    The discussion of how to make FPGA hardware acceleration solutions easier to use for the 'layman' is starting btw http://reconfigurablecomputing4themasses.net/.
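The seal-counting idea above can be caricatured in a few lines. This is a toy sketch, not anything from the PLOS ONE paper: it thresholds a tiny made-up grayscale image and counts connected bright blobs, which is the crudest possible stand-in for counting animals in a b/w satellite image (a real system would need a trained detector).

```python
# Toy illustration (not from the paper): count bright blobs in a grayscale
# image as a crude stand-in for counting seals in b/w satellite data.
# The threshold and the test image below are made-up demo values.

def count_blobs(image, threshold):
    """Count 4-connected components of pixels brighter than `threshold`."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]          # flood-fill this blob
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and image[y][x] > threshold and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return blobs

# Two separated bright patches on a dark background -> 2 "seals".
img = [
    [0, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 0],
    [0, 9, 0, 0, 8, 0],
    [0, 0, 0, 0, 8, 8],
    [0, 0, 0, 0, 0, 0],
]
print(count_blobs(img, threshold=5))  # -> 2
```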

Introducing A Brain-inspired Computer [IBM TrueNorth] - 0 views

  •  
Built in silicon (Samsung's 28 nm process), its capacity is measured as one million neurons and 256 million synapses. It contains 5.4 billion transistors, making it the largest IBM chip in those terms. All this said, it consumes less than 100 mW!! "These systems can efficiently process high-dimensional, noisy sensory data in real time, while consuming orders of magnitude less power than conventional computer architectures." IBM is working with iniLabs to integrate the DVS retinal camera with these chips = real-time neuromorphic image processing. In what seems to be a very successful project, hugely funded by DARPA: "Our sights are now set high on the ambitious goal of integrating 4,096 chips in a single rack with 4 billion neurons and 1 trillion synapses while consuming ~4kW of power."

InfoQ: A Crash Course in Modern Hardware - 3 views

  •  
    for francesco ;) though i guess he knows it all already so for the others who wanna know too
  •  
    Cool, lots of useful info in there. Though, never having programmed in Java before, I wonder if one can go that low-level in Java?
  •  
    oh I don't think so but it is interesting for the JVM I guess

Tilera Corporation - 2 views

  •  
    who wants 100 cores ... future of PAGMO?
  • ...2 more comments...
  •  
Well, nVidia provides 10,000 "cores" in a single rack on their Teslas...
  •  
    remember that you were recommending its purchase already some time ago ... still strong reasons to do so?
  •  
The problem with this flurry of activity around multicore architectures is that it is really unclear which one will be the winner in the long run. Never underestimate the power of inertia, especially in the software industry (after all, people are still programming in COBOL and Fortran today). For instance, NVIDIA gives you Teslas with 10,000 cores, but then you have to rewrite extensive parts of your code to take advantage of them. Is this an investment worth undertaking? Difficult to say; it certainly would be if the whole software world moved in that direction (which is not happening - yet?). But then you have other approaches coming out, such as the Cell processor by IBM (the one in the PS3), which has really impressive floating-point performance and, of course, a completely different programming model. The nice thing about this Tilera processor seems to be that it is a general-purpose processor, which may not require extensive re-engineering of existing code (but I'm really hypothesizing, since the technical details are not very abundant on their website).
  •  
Moreover, PaGMO's computation model is geared more towards systems with distributed memory than shared memory (i.e. multi-core). In the latter, at a certain point memory access becomes the bottleneck.
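The distributed-memory point can be sketched in a few lines. This is a toy illustration of the general island idea, not PaGMO's actual API: each "island" runs an independent search in its own process, and only a tiny result crosses the process boundary, so there is no shared memory for many cores to contend over.

```python
# Minimal sketch of a distributed-memory "island" setup (not PaGMO's API):
# each island evolves independently in its own process; only a small result
# is sent back, avoiding shared-memory contention. The "evolution" here is
# just random search minimising f(x) = x^2, purely for demonstration.
import random
from multiprocessing import Pool

def evolve_island(seed):
    """Stand-in for an independent optimisation run on one island."""
    rng = random.Random(seed)
    return min(rng.uniform(-5, 5) ** 2 for _ in range(10_000))

if __name__ == "__main__":
    with Pool(4) as pool:                      # four islands, four processes
        bests = pool.map(evolve_island, range(4))
    print(min(bests))                          # overall best across islands
```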

DIRECT - Wikipedia, the free encyclopedia - 0 views

  • DIRECT is a proposed alternative Shuttle-Derived Launch Vehicle architecture supporting NASA's Vision for Space Exploration, which would replace the space agency's planned Ares I and Ares V rockets with a family of launch vehicles named "Jupiter."
  • DIRECT is advocated by a group of space enthusiasts that asserts it represents a broader team of dozens of NASA and space industry engineers who actively work on the proposal on an anonymous, volunteer basis in their spare time.
  •  
    Just read about this, it looks like an interesting example of bottom-up innovation and self-organization.

The Origin of Artificial Species: Creating Artificial Personalities - 0 views

  • The first artificial creature to receive the genomic personality is Rity, a dog-like software character that lives in a virtual 3D world in a PC
  • In Rity, internal states such as motivation, homeostasis and emotion change according to the incoming perception
  • The internal control architecture processes incoming sensor information, calculates each value of internal states as its response, and sends the calculated values to the behavior selection module to generate a proper behavior.
  •  
    they have found Christina's dog !!

I want to understand Ning's architecture and how it works - Ning Documentation - 0 views

shared by ESA ACT on 24 Apr 09
  •  
    can we use this for our planned network and also host it on our server?? (LS)

The challenges of Big Data analysis @NSR_Family - 2 views

  •  
Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This paper gives an overview of the salient features of Big Data and how these features impact paradigm change in statistical and computational methods as well as computing architectures.
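One of the challenges listed, spurious correlation, is easy to demonstrate numerically. A quick sketch (the sample size and dimensionality are arbitrary demo values, not from the paper): with far more features than samples, some purely random feature will correlate strongly with a purely random response just by chance.

```python
# Demo of spurious correlation in high dimensions: y is pure noise, yet
# among 2000 independent random features some correlate with it strongly
# by chance. n and p are arbitrary demo values.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 2000                  # few samples, many features
y = rng.standard_normal(n)
X = rng.standard_normal((n, p))

# absolute sample correlation of each feature with y
yc = (y - y.mean()) / y.std()
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
corr = np.abs(Xc.T @ yc) / n

print(f"max spurious correlation: {corr.max():.2f}")
```

With independent Gaussian noise the typical correlation scales like 1/sqrt(n), but the maximum over thousands of features is much larger, which is exactly why naive variable selection breaks down at high dimensionality.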

Bio-Derived Porous Carbon Anodes for Li-ion Batteries #Nature - 3 views

  •  
Here we explore the electrochemical performance of pyrolyzed skins from the species A. bisporus, also known as the Portobello mushroom, as free-standing, binder-free, and current-collector-free Li-ion battery anodes. At temperatures above 900 °C, the biomass-derived carbon nanoribbon-like architectures undergo unique processes to become hierarchically porous. Basically, they burned a Portobello mushroom and used it as a battery... now that's a multidisciplinary advanced concept.

Google Just Open Sourced the Artificial Intelligence Engine at the Heart of Its Online ... - 2 views

  •  
    TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.
  •  
And the interface even looks a bit less clunky than Theano's
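The "data flow graph" idea in the description above can be caricatured in a few lines of plain Python. This is a conceptual toy, not TensorFlow's API: nodes are operations, edges carry values between them, and evaluating the output node walks the graph in dependency order.

```python
# Toy dataflow graph: nodes are operations, edges carry values, and
# evaluation recurses through dependencies. A conceptual sketch of the
# model TensorFlow's description outlines, not TensorFlow code.

class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def eval(self):
        # evaluate upstream nodes first, then apply this node's operation
        return self.op(*(n.eval() for n in self.inputs))

def const(v):
    """A source node with no inputs that always yields `v`."""
    return Node(lambda: v)

# Graph for (a + b) * c with a=2, b=3, c=4
a, b, c = const(2), const(3), const(4)
add = Node(lambda x, y: x + y, a, b)
mul = Node(lambda x, y: x * y, add, c)
print(mul.eval())  # -> 20
```

In a real framework the graph also lets a scheduler place independent nodes on different CPUs or GPUs, which is where the "deploy computation to one or more devices" claim comes from.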

Rocking puts adults to sleep faster and makes slumber deeper | Science News - 2 views

  •  
    First really strong evidence that the vestibular system affects sleep architecture, sleep stability and sleep spindles. If there is an effect due to a changing acceleration there might also be an effect of no gravity vector. We'll find out when I get the space shuttle data.

Synthesis of Carbon Nanofibres direct from CO2 atmosphere - 9 views

  •  
It may be feasible to use this in the Martian atmosphere (~9 mbar CO2) to directly grow carbon nanofibres for infrastructural needs
  • ...1 more comment...
  •  
    This is clearly interesting for the new YGT on Space Architecture (with background on fabrics) that comes in October. Since I was asked to provide input here, this could be a solid ground to start with. Thanks. :)
  •  
    nice!
  •  
    gave it to Hanna, she is looking into it now. Manchester and Ghent University could be potential collaborators.

Japanese Space Research Center will be Suspended Over a Moonlike Crater - 1 views

  •  
    They are developing so-called "Avatar" technology which will allow people to control robots remotely, as in the movie "Avatar." With Avatar X, they hope to revolutionize space exploration, resource extraction, and other space-based activities. On the Avatar X website, it says, "AVATAR X aims to capitalize on the growing space-based economy by accelerating development of real-world Avatars that will enable humans to remotely build camps on the Moon, support long-term space missions and further explore space from afar."

A mission architecture to reach and operate at the focal region of the solar gravitatio... - 1 views

  •  
    Thoughts? :)