Group items tagged: algorithms, computer

Fabien Cadet

MIT's Introduction to Algorithms, Lectures 22 and 23: Cache Oblivious Algorithms - good... - 0 views

  • Cache-oblivious algorithms should not be confused with cache-aware algorithms. Cache-aware algorithms and data structures explicitly depend on various hardware configuration parameters, such as the cache size. Cache-oblivious algorithms do not depend on any hardware parameters.
  • An example of a cache-aware (not cache-oblivious) data structure is a B-tree with the explicit parameter B, the size of a node. The main disadvantage of cache-aware algorithms is that they are based on knowledge of the memory structure and size, which makes it difficult to port implementations from one architecture to another.
  •  
    « Cache-oblivious algorithms take into account something that has been ignored in all the lectures so far, namely the multilevel memory hierarchy of modern computers. Retrieving items from the various levels of memory and cache makes up a dominant factor of running time, so for speed it is crucial to minimize these costs. The main idea of cache-oblivious algorithms is to achieve optimal use of caches on all levels of a memory hierarchy without knowledge of their size. »
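
    A minimal sketch of the cache-oblivious idea (my own illustration, not from the lecture notes): transpose an n-by-n matrix by recursively halving the longer dimension. At some depth of the recursion the sub-blocks fit in every level of the cache hierarchy, yet no cache size appears in the code; the small cutoff is only a recursion base case, not a tuning knob like a B-tree's node size B.

        #include <cstddef>
        #include <vector>

        // Cache-obliviously transpose the rows x cols block of the n x n
        // row-major matrix a (top-left corner at ri, ci) into b.
        void transpose_rec(const std::vector<double>& a, std::vector<double>& b,
                           std::size_t n, std::size_t ri, std::size_t ci,
                           std::size_t rows, std::size_t cols) {
            const std::size_t CUTOFF = 16;  // base case only, not a cache parameter
            if (rows <= CUTOFF && cols <= CUTOFF) {
                for (std::size_t r = ri; r < ri + rows; ++r)
                    for (std::size_t c = ci; c < ci + cols; ++c)
                        b[c * n + r] = a[r * n + c];
            } else if (rows >= cols) {      // split the longer dimension in half
                transpose_rec(a, b, n, ri, ci, rows / 2, cols);
                transpose_rec(a, b, n, ri + rows / 2, ci, rows - rows / 2, cols);
            } else {
                transpose_rec(a, b, n, ri, ci, rows, cols / 2);
                transpose_rec(a, b, n, ri, ci + cols / 2, rows, cols - cols / 2);
            }
        }
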
Fabien Cadet

STXXL : Standard Template Library for Extra Large Data Sets - 4 views

  • The key features of STXXL are:
  • Transparent support of parallel disks. The library provides implementations of basic parallel disk algorithms. STXXL is the only external memory algorithm library supporting parallel disks.
  • The library is able to handle problems of very large size (tested with up to dozens of terabytes).
  • Improved utilization of computer resources. STXXL implementations of external memory algorithms and data structures benefit from overlapping I/O and computation.
  • Small constant factors in I/O volume. A unique library feature called "pipelining" can save more than half the number of I/Os by streaming data between algorithmic components instead of temporarily storing them on disk. A development branch supports asynchronous execution of the algorithmic components, enabling high-level task parallelism.
  • Shorter development times due to well-known STL-compatible interfaces for external memory algorithms and data structures.
  • For internal computation, parallel algorithms from the MCSTL or the libstdc++ parallel mode are optionally utilized, so the algorithms inherently benefit from multi-core parallelism.
  •  
    « The core of STXXL is an implementation of the C++ standard template library STL for external memory (out-of-core) computations, i.e., STXXL implements containers and algorithms that can process huge volumes of data that only fit on disks. »
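
    A hedged usage sketch of that STL-style interface (my own example, based on STXXL's documented stxxl::vector and stxxl::sort; the sentinel-providing comparator and the explicit internal-memory budget follow the library's documentation, but treat the details as approximate):

        #include <cstdlib>
        #include <limits>
        #include <stxxl/vector>
        #include <stxxl/sort>

        // stxxl::sort requires a comparator that also supplies sentinel
        // values, so the external-memory merge knows the key range.
        struct CmpIntLess {
            bool operator()(int a, int b) const { return a < b; }
            int min_value() const { return std::numeric_limits<int>::min(); }
            int max_value() const { return std::numeric_limits<int>::max(); }
        };

        int main() {
            stxxl::vector<int> v;   // container paged to disk, not held in RAM
            for (long i = 0; i < 100L * 1000 * 1000; ++i)
                v.push_back(std::rand());
            // Sort externally using at most 512 MiB of internal memory;
            // the library overlaps I/O with computation internally.
            stxxl::sort(v.begin(), v.end(), CmpIntLess(), 512 * 1024 * 1024);
            return 0;
        }
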
Andrey Karpov

Checking OpenCV with PVS-Studio - 0 views

  •  
    OpenCV is a library of computer vision algorithms, image-processing algorithms, and general-purpose numerical algorithms. The library is written in C/C++ and is free for both academic and commercial use, as it is distributed under the BSD license. The time has come to check this library with the PVS-Studio code analyzer. OpenCV is a large library: it contains more than 2500 optimized algorithms and consists of more than 1 million lines of code. The cyclomatic complexity of its most complex function, cv::cvtColor(), is 415. It is no wonder that we found quite a few errors and questionable fragments in its code. However, taking the size of the source code into account, we may call this a high-quality library.
Fabien Cadet

Shtetl-Optimized » Blog Archive » Fighting Hype with Hype - 2 views

  • In the end, they conclude that NP-complete problems are just as hard on an adiabatic quantum computer as on a classical computer. And, since earlier work showed the equivalence between different variants of quantum computers, that pretty much shuts down the possibility of any quantum computer helping with NP-complete problems.
  •  
    [...] quantum computers don't help with NP-complete problems?
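
    To put "don't help" in perspective (my gloss, not from the post): for an NP-complete search over n Boolean variables, i.e. N = 2^n candidate assignments, the best known generic quantum attack is Grover's algorithm, and the BBBV lower bound shows unstructured search can do no better:

        % Classical vs. quantum query complexity of unstructured search, N = 2^n
        T_{\mathrm{classical}} = \Theta(N) = \Theta(2^{n}), \qquad
        T_{\mathrm{quantum}} = \Theta\bigl(\sqrt{N}\bigr) = \Theta\bigl(2^{n/2}\bigr)

    That is a quadratic speedup, but still exponential in n; "help" in the sense of an exponential speedup is exactly what the post says quantum computers don't provide for NP-complete problems.
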
Joel Bennett

The Watchmaker Framework for Evolutionary Computation (evolutionary/genetic algorithms ... - 6 views

  •  
    "The Watchmaker Framework is an extensible, high-performance, object-oriented framework for implementing platform-independent evolutionary/genetic algorithms in Java. "
Matteo Spreafico

Joe Duffy's Weblog - OnBeingStateful - 0 views

  • The biggest question left unanswered in my mind is the role state will play in software of the future. That seems like an absurd statement, or a naïve one at the very least.  State is everywhere: The values held in memory. Data locally on disk. Data in-flight that is being sent over a network. Data stored in the cloud, including on a database, remote filesystem, etc. Certainly all of these kinds of state will continue to exist far into the future.  Data is king, and is one major factor that will drive the shift to parallel computing.  The question then is how will concurrent programs interact with this state, read and mutate it, and what isolation and synchronization mechanisms are necessary to do so?
  • Many programs have ample gratuitous dependencies, simply because of the habits we’ve grown accustomed to over 30-odd years of imperative programming.  Our education, mental models, books, best-of-breed algorithms, libraries, and languages all push us in this direction.  We like to scribble intermediary state into shared variables because it’s simple to do so and because it maps to our von Neumann model of how the computer works.
  • We need to get rid of these gratuitous dependencies.  Merely papering over them with a transaction (making them “safe”) doesn’t do anything to improve the natural parallelism that a program contains.  It just ensures it doesn’t crash.  Sure, that’s plenty important, but providing programming models and patterns to eliminate the gratuitous dependencies also achieves the goal of not crashing, with the added benefit of actually improving scalability too.  Transactions have worked so well in enabling automatic parallelism in databases because the basic model itself (without transactions) already implies natural isolation among queries.  Transactions break down and scalability suffers for programs that aren’t architected in this way.  We should learn from the experience of the database community in this regard (a minimal code sketch of the dependent-versus-isolated contrast follows these annotations).
  • There will always be hidden mutation of shared state inside lower-level system components.  These are often called “benevolent side-effects,” thanks to Hoare, and apply to things like lazy initialization and memoization caches.  These will be done by concurrency ninjas who understand locks.  And their effects will be isolated by convention.
  • Even with all of this support, we’d be left with an ecosystem of libraries like the .NET Framework itself which have been built atop a fundamentally mutable and imperative system.  The path forward here is less clear to me, although having the ability to retain a mutable model within pockets of guaranteed isolation certainly makes me think the libraries are salvageable.  Thankfully, the shift will likely be very gradual, and the pieces that pose substantial problems can be rewritten in place incrementally over time.  But we need the fundamental language and type system support first.
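
    A tiny sketch of that dependent-versus-isolated contrast (my example, not Duffy's): the same reduction written against one shared mutable variable, and rewritten so each task owns private state that is merged only at the end.

        #include <cstddef>
        #include <future>
        #include <numeric>
        #include <vector>

        // Gratuitous dependency: every step mutates the same shared variable,
        // so the loop cannot be split across threads without synchronization.
        long sum_shared(const std::vector<int>& data) {
            long total = 0;                  // shared intermediary state
            for (int x : data) total += x;   // each step depends on the last
            return total;
        }

        // Naturally isolated: each half is reduced into private state; the only
        // communication is merging the independent partial results at the end.
        long sum_isolated(const std::vector<int>& data) {
            auto mid = data.begin() + static_cast<std::ptrdiff_t>(data.size() / 2);
            auto lo = std::async(std::launch::async, [&] {
                return std::accumulate(data.begin(), mid, 0L);
            });
            long hi = std::accumulate(mid, data.end(), 0L);
            return lo.get() + hi;            // merge: no locks, no shared mutation
        }
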
vikramsjn

Programming Proverbs 20: Provide good documentation - Computer Science Teacher - Though... - 0 views

  • The minimum documentation for a piece of code is a list and description of: inputs (names, types, and purposes), outputs (types being the most important), and a description of the algorithm (see the sketch after these annotations).
  • The old line is that the job is not done until the paperwork is done. For programming, documentation is the paperwork.
  • My axiom is: write documentation ONLY when it hurts not to have it.
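
    A small illustration of that minimum (my example, not from the post): a function whose header comment lists each input with its name, type, and purpose, the output type, and the algorithm used.

        #include <cstddef>
        #include <vector>

        // binary_search_index
        // Inputs:  sorted - std::vector<int>, ascending values to search (unmodified)
        //          key    - int, the value to locate
        // Output:  int    - zero-based index of key, or -1 when absent
        // Algorithm: iterative binary search, O(log n) comparisons.
        int binary_search_index(const std::vector<int>& sorted, int key) {
            std::size_t lo = 0, hi = sorted.size();
            while (lo < hi) {
                std::size_t mid = lo + (hi - lo) / 2;
                if (sorted[mid] < key)      lo = mid + 1;
                else if (sorted[mid] > key) hi = mid;
                else                        return static_cast<int>(mid);
            }
            return -1;
        }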