


Navneet Kumar

Joe Gregorio | BitWorking | ETech '07 Summary - Part 2 - MegaData - 0 views

  • the limits you need to put on yourself when storing a billion rows in a database, and they included: no joins, no transactions, no stored procedures, and no triggers.
  • Joshua has similar suggestions from his experience building del.icio.us: no joins, no transactions, no autoincrement
  • BigTable, Google's column-based store with no transactions
    • Navneet Kumar
       
      "Column Based"
  • What's the point in designing tables for a webapp when an RDF-backed store will manage the data for you and RDF queries will come back as tabular data anyway?
  • designing and maintaining yet another relational schema for yet another webapp - doing so is starting to make as much sense as designing my own filesystem or TP monitor.
  • RDF + SPARQL + distributed data sources from around the web?
  • reason that Rails and Django are so productive; they're highly optimised for domain models. Raw RDF doesn't really do domains like that; you have to expend effort distilling triples into 'things'.
  •  
    Database design for huge data: distributed, joinless, transactionless, de-normalized databases (see the sketch below).
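To make the "column-based, joinless" idea concrete, here is a minimal Java sketch of modeling a denormalized row as column-family maps keyed by row key, so a read is a single lookup with no joins or transactions. This is illustrative only; it is not BigTable's actual API, and all names are made up for the example.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical illustration of a BigTable-style row: one row key maps to
    // column families, each holding column -> value. Reads are single lookups,
    // with no joins, transactions, or triggers involved.
    public class ColumnStoreSketch {

        // rowKey -> (columnFamily -> (column -> value))
        private final Map<String, Map<String, Map<String, String>>> rows =
                new ConcurrentHashMap<>();

        public void put(String rowKey, String family, String column, String value) {
            rows.computeIfAbsent(rowKey, k -> new ConcurrentHashMap<>())
                .computeIfAbsent(family, k -> new ConcurrentHashMap<>())
                .put(column, value);
        }

        public String get(String rowKey, String family, String column) {
            return rows.getOrDefault(rowKey, Map.of())
                       .getOrDefault(family, Map.of())
                       .get(column);
        }

        public static void main(String[] args) {
            ColumnStoreSketch store = new ColumnStoreSketch();
            // The whole user profile lives under one row key, denormalized.
            store.put("user:42", "profile", "name", "Alice");
            store.put("user:42", "links", "homepage", "http://example.org");
            System.out.println(store.get("user:42", "profile", "name")); // Alice
        }
    }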
Navneet Kumar

PicoContainer - Home - 0 views

  •  
    PicoContainer is an implementation of a lightweight, embeddable container using the dependency injection design pattern. A good example for learning API frameworks and design patterns (see the sketch below).
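A minimal sketch of what container-driven wiring looks like, assuming a PicoContainer 2.x-style API (the class and method names may differ between versions; treat them as assumptions, not a definitive usage guide). The Juicer/Peeler components are hypothetical.

    import org.picocontainer.DefaultPicoContainer;
    import org.picocontainer.MutablePicoContainer;

    // Hypothetical components: Juicer depends on Peeler via its constructor.
    interface Peeler { void peel(); }

    class ManualPeeler implements Peeler {
        public void peel() { /* ... */ }
    }

    class Juicer {
        private final Peeler peeler;
        public Juicer(Peeler peeler) { this.peeler = peeler; } // constructor injection
        public void makeJuice() { peeler.peel(); }
    }

    public class PicoExample {
        public static void main(String[] args) {
            // Assumed PicoContainer 2.x API: register implementations, then ask
            // for a component; the container resolves constructor dependencies.
            MutablePicoContainer pico = new DefaultPicoContainer();
            pico.addComponent(Peeler.class, ManualPeeler.class);
            pico.addComponent(Juicer.class);
            pico.getComponent(Juicer.class).makeJuice();
        }
    }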
Navneet Kumar

An Unorthodox Approach to Database Design : The Coming of the Shard | High Scalability - 0 views

  •  
    A shard architecture partitions data onto multiple servers, so each server holds only a shard of the data (see the sketch below).
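A minimal sketch of the routing idea behind sharding: pick the server for a record by taking its key modulo the number of shards, so all of one user's data lives on one server. Names and JDBC URLs are hypothetical; real shard maps are usually configurable and support resharding.

    import java.util.List;

    // Hypothetical shard router: each user's data lives entirely on one shard,
    // so queries for that user never need cross-server joins.
    public class ShardRouter {
        private final List<String> shardJdbcUrls;

        public ShardRouter(List<String> shardJdbcUrls) {
            this.shardJdbcUrls = shardJdbcUrls;
        }

        public String shardFor(long userId) {
            // Stable mapping from user id to shard index.
            int index = (int) Math.floorMod(userId, (long) shardJdbcUrls.size());
            return shardJdbcUrls.get(index);
        }

        public static void main(String[] args) {
            ShardRouter router = new ShardRouter(List.of(
                    "jdbc:mysql://db0/app", "jdbc:mysql://db1/app"));
            System.out.println(router.shardFor(42L)); // jdbc:mysql://db0/app
        }
    }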
Navneet Kumar

Java(TM) Boutique - The Spring Framework - 0 views

  • Spring does not impose itself wholly onto the design of a project. Spring is modular and has been divided logically into independent packages, which can function independently.
  • It encourages users to introduce Spring into existing applications in a phased manner, so no matter what kind of framework you are using now, Spring will co-exist with it without causing you nightmares; furthermore, Spring lets you choose specific packages of Spring (see the sketch below).
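As an illustration of that phased adoption, here is a minimal sketch of using just the core bean container from classic, XML-era Spring without adopting the rest of the stack. The beans.xml file and the greetingService bean id are assumptions for the example, not part of the article.

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class SpringBootstrap {
        public static void main(String[] args) {
            // Load only the bean-container portion of Spring; the rest of the
            // application can keep whatever framework it already uses.
            ApplicationContext ctx =
                    new ClassPathXmlApplicationContext("beans.xml"); // assumed config file
            GreetingService service =
                    (GreetingService) ctx.getBean("greetingService"); // assumed bean id
            System.out.println(service.greet("world"));
        }
    }

    // Hypothetical service interface wired up in beans.xml.
    interface GreetingService {
        String greet(String name);
    }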
Navneet Kumar

www.dontclick.it - 0 views

shared by Navneet Kumar on 23 Oct 07
  •  
    UI design without mouse clicks.
Navneet Kumar

Inversion of Control Containers and the Dependency Injection pattern - 0 views

  • The main difference is that I expect a component to be used locally (think jar file, assembly, dll, or a source import). A service will be used remotely through some remote interface, either synchronous or asynchronous.
  • configuration through XML files and also through code
    • Navneet Kumar
       
      or can be stored in a DB table
  • dynamic service locator
  • Dependency injection and a service locator aren't necessarily mutually exclusive concepts.
  • With service locator the application class asks for it explicitly by a message to the locator. With injection there is no explicit request, the service appears in the application class - hence the inversion of control.
  • with a Service Locator every user of a service has a dependency to the locator. The locator can hide dependencies to other implementations, but you do need to see the locator.
  • In this kind of scenario I don't see the injector's inversion as providing anything compelling. The difference comes if the lister is a component that I'm providing to an application that other people are writing. In this case I don't know much about the APIs of the service locators that my customers are going to use. Each customer might have their own incompatible service locators.
  • Since with an injector you don't have a dependency from a component to the injector, the component cannot obtain further services from the injector once it's been configured.
  • frameworks should minimize their impact upon application code, and particularly should not do things that slow down the edit-execute cycle. Using plugins to substitute heavyweight components does a lot to help this process, which is vital for practices such as Test Driven Development.
  • a more general issue with object-oriented programming - should you fill fields in a constructor or with setters.
  • Another advantage with constructor initialization is that it allows you to clearly hide any fields that are immutable by simply not providing a setter. I think this is important - if something shouldn't change then the lack of a setter communicates this very well
  • The problem with classic Factory Methods for components assembly is that they are usually seen as static methods, and you can't have those on interfaces. You can make a factory class, but then that just becomes another service instance. A factory service is often a good tactic, but you still have to instantiate the factory using one of the techniques here
  • Factory Methods
  • start with constructor injection, but be ready to switch to setter injection
  • One case is where you have a simple application that's not got a lot of deployment variation. In this case a bit of code can be clearer than separate XML file.
  • configuration files or code on an API to wire up services
  • the Java world at the moment is a cacophony of configuration files,
  • using aspect oriented ideas with these containers
    • Navneet Kumar
       
      Aspect-Oriented Programming (AOP) can be combined with IoC.
  • When building application classes the two are roughly equivalent, but I think Service Locator has a slight edge due to its more straightforward behavior. However if you are building classes to be used in multiple applications then Dependency Injection is a better choice.
  •  
    Article by Martin Fowler on the Inversion of Control and Dependency Injection design patterns, which are used in the Spring framework (see the sketch below).
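A minimal sketch of the two approaches the article contrasts, using the article's MovieLister/MovieFinder example names (the concrete classes here are made up): with a service locator the lister asks a registry for its finder, while with constructor injection the finder is handed in and the class has no dependency on any locator.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    interface MovieFinder {
        List<String> findAll();
    }

    // Service locator style: every user of the service asks the locator for it,
    // so every user also depends on the locator itself.
    class ServiceLocator {
        private static final Map<String, Object> services = new HashMap<>();
        static void register(String name, Object service) { services.put(name, service); }
        static Object lookup(String name) { return services.get(name); }
    }

    class LocatorMovieLister {
        private final MovieFinder finder =
                (MovieFinder) ServiceLocator.lookup("movieFinder"); // explicit request
        List<String> moviesDirectedBy(String director) { return finder.findAll(); } // filtering omitted
    }

    // Constructor injection style: the dependency "appears" via the constructor;
    // the class knows nothing about how it was assembled (inversion of control),
    // and the final field with no setter communicates immutability.
    class InjectedMovieLister {
        private final MovieFinder finder;
        InjectedMovieLister(MovieFinder finder) { this.finder = finder; }
        List<String> moviesDirectedBy(String director) { return finder.findAll(); } // filtering omitted
    }

    public class WiringExample {
        public static void main(String[] args) {
            MovieFinder finder = () -> List.of("Ran", "Yojimbo"); // stub finder
            ServiceLocator.register("movieFinder", finder);
            System.out.println(new LocatorMovieLister().moviesDirectedBy("Kurosawa"));
            System.out.println(new InjectedMovieLister(finder).moviesDirectedBy("Kurosawa"));
        }
    }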
Navneet Kumar

defmacro - Functional Programming For The Rest of Us - 0 views

  • In a lazy language you have no guarantee that the first line will be executed before the second! This means we can't do IO, can't use native functions in any meaningful way (because they need to be called in order since they depend on side effects), and can't interact with the outside world! If we were to introduce primitives that allow ordered code execution we'd lose the benefits of reasoning about our code mathematically
  • continuations, monads, and uniqueness typing.
  • Alonzo Church developed a formal system called lambda calculus. The system was essentially a programming language for one of these imaginary machines
  • Functions that operate on other functions (accept them as arguments) are called higher order functions.
  • currying is used to reduce the number of arguments
  • only executes code when it's required
  • A lazy compiler thinks of functional code exactly as mathematicians think of an algebra expression - it can cancel things out and completely prevent execution, rearrange pieces of code for higher efficiency, even arrange code in a way that reduces errors, all guaranteeing optimizations won't break the code.
  • John McCarthy (also a Princeton graduate) developed interest in Alonzo Church's work. In 1958 he unveiled a List Processing language (Lisp)
  • Lisp machine - effectively a native hardware implementation of Alonzo's lambda calculus!
  • it was proved that lambda calculus is equivalent to a Turing machine.
  • It turns out that functional programs can keep state, except they don't use variables to do it. They use functions instead. The state is kept in function parameters, on the stack. If you want to keep state for a while and every now and then modify it, you write a recursive function
  • Erlang engineers have been upgrading live systems without stopping them for years.
  • It's not that Java systems aren't scalable and reliable; they are. Erlang systems are simply rock solid
  • Ericsson designed a functional language called Erlang for use in its highly tolerant and scalable telecommunication switches.
  • Continuation Passing Style or CPS
  • A "continuation" is a parameter we may choose to pass to our function that specifies where the function should return.
  • continuations are a generalization of functions.
  • CPS version needs no stack! No function ever "returns" in the traditional sense, it just calls another function with the result instead. We don't need to push function arguments on the stack with every call and then pop them back, we can simply store them in some block of memory and use a jump instruction instead. We'll never need the original arguments - they'll never be used again since no function ever returns!
  • What does the stack contain? Simply the arguments, and a pointer to memory where the function should return. Do you see a light bulb? The stack simply contains continuation information! The pointer to the return instruction in the stack is essentially the same thing as the function to call in CPS programs!
  • A continuation and a pointer to the return instruction in the stack are really the same thing, only a continuation is passed explicitly, so that it doesn't need to be the same place where the function was called from
  • When we get a current continuation and store it somewhere, we end up storing the current state of our program - freezing it in time. This is similar to an OS putting itself into hibernation. A continuation object contains the information necessary to restart the program from the point where the continuation object was acquired.
  •  
    Deals with the history of functional languages, what functional languages are, and their benefits (see the sketch below).
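To ground a few of the concepts above in a mainstream language, here is a minimal Java sketch (illustrative only, not from the article) of a higher-order function, a curried function, and a continuation-passing-style computation in which the "return" is an explicit callback rather than a stack pop.

    import java.util.function.Function;
    import java.util.function.IntConsumer;
    import java.util.function.IntUnaryOperator;

    public class FunctionalSketch {

        // Higher-order function: takes a function and returns a function that applies it twice.
        static IntUnaryOperator twice(IntUnaryOperator f) {
            return x -> f.applyAsInt(f.applyAsInt(x));
        }

        // Currying: add takes one argument and returns a function of the second argument.
        static Function<Integer, Function<Integer, Integer>> add = a -> b -> a + b;

        // Continuation-passing style: instead of returning, pass the result to the continuation k.
        static void addCps(int a, int b, IntConsumer k) {
            k.accept(a + b);
        }

        public static void main(String[] args) {
            System.out.println(twice(x -> x + 3).applyAsInt(10));   // 16
            System.out.println(add.apply(2).apply(5));              // 7
            addCps(2, 5, result -> addCps(result, 10, System.out::println)); // 17
        }
    }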

Navneet Kumar

MySQL Performance Blog » MySQL Query Cache - 0 views

  • It does not cache the plan but full result sets
  • so it looks at the first letter of the query, and if it is "S" it proceeds with the query lookup in the cache; if not, it skips it.
  • Might not work with transactions
  • If a query uses non-deterministic functions such as UUID(), RAND(), CONNECTION_ID(), etc., it will not be cached.
  • If a table gets modified, all queries derived from that table are invalidated at once.
  • If you have a high-write application such as forums, query cache efficiency might be pretty low due to this.
  • All queries are removed from the cache on table modifications; if there are a lot of queries being cached, this might reduce update speed a bit.
  • Qcache_free_memory and Qcache_lowmem_prunes
  • Compare the number of your selects (Com_select) with how many of them are cached. Query cache efficiency would be Qcache_hits/(Com_select+Qcache_hits).
  • One portion of query cache overhead is of course inserts, so you can see how many of the inserted queries are used: Qcache_hits/Qcache_inserts. The other portion of overhead comes from modification statements, which you can calculate as (Com_insert+Com_delete+Com_update+Com_replace)/Qcache_hits.
  • want to set query cache
  • means it is much more efficient, as a query that required processing millions of rows can now be instantly served from the query cache
  • It also means the query has to be exactly the same and deterministic, so the hit rate will generally be lower
  • full page caching
  • If you are not using query_cache_wlock_invalidate=ON, locking a table for write will not invalidate the query cache, so you can get results even if the table is locked and is being prepared to be updated.
  • it can serve responses very fast doing no extra conversion or processing.
  • but it is still not as fast as specially designed systems such as memcached or local shared memory.
  • It is not distributed. If you have 10 slaves and use the query cache on all of them, the cache content will likely be the same, so you have multiple copies of the same data in cache, effectively wasting memory.
  •  
    MySQL query caching (see the sketch below).
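As a rough illustration of the efficiency formula above, a minimal JDBC sketch that reads the relevant status counters and computes Qcache_hits/(Com_select+Qcache_hits). The connection URL and credentials are placeholders, it needs the MySQL JDBC driver on the classpath, and it only applies to MySQL versions that still have the query cache (it was removed in MySQL 8.0).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.HashMap;
    import java.util.Map;

    public class QueryCacheEfficiency {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection settings for the example.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/test", "user", "password");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SHOW GLOBAL STATUS WHERE Variable_name IN "
                         + "('Qcache_hits', 'Com_select')")) {

                Map<String, Long> status = new HashMap<>();
                while (rs.next()) {
                    // Columns are Variable_name and Value (returned as a string).
                    status.put(rs.getString(1), Long.parseLong(rs.getString(2)));
                }
                long hits = status.getOrDefault("Qcache_hits", 0L);
                long selects = status.getOrDefault("Com_select", 0L);
                double efficiency = (hits + selects == 0)
                        ? 0.0 : (double) hits / (hits + selects);
                System.out.printf("Query cache efficiency: %.1f%%%n", efficiency * 100);
            }
        }
    }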
Navneet Kumar

When Not to Normalize your SQL Database - SWiK - 0 views

  • With the above design, it takes six SQL Join operations to access and display the information about a single user. This makes rendering the profile page a fairly database intensive operation which is compounded by the fact that profile pages are the most popular pages on social networking sites.
  • Database denormalization is the kind of performance optimization that should be carried out as a last resort after trying things like creating database indexes, using SQL views and implementing application specific in-memory caching. However if you hit massive scale and are dealing with millions of queries a day across hundreds of millions to billions of records or have decided to go with database partitioning/sharding then you will likely end up resorting to denormalization
    • Navneet Kumar
       
      De-normalization is OK if you aren't going to update.
  • Denormalization means that you are now likely to deal with data inconsistencies, because you are storing redundant copies of data and may not be able to update all copies of a column value simultaneously when it is changed, for a variety of reasons. Having tools in your infrastructure to support fixing up data of this sort then becomes very important.
  •  
    De-normalizing a database to improve speed (see the sketch below).
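A minimal sketch of the trade-off, with hypothetical tables and columns: the normalized profile read needs joins, the denormalized read is a single-row lookup, and the price is that a change to the user's name must now be written to every redundant copy (or the copies drift out of sync).

    // Hypothetical schema for illustration only.
    public class DenormalizationSketch {

        // Normalized: rendering a profile joins several tables.
        static final String NORMALIZED_PROFILE_QUERY =
                "SELECT u.name, c.city, s.school "
                + "FROM users u "
                + "JOIN user_cities c ON c.user_id = u.id "
                + "JOIN user_schools s ON s.user_id = u.id "
                + "WHERE u.id = ?";

        // Denormalized: the profile page is a single-row read, no joins.
        static final String DENORMALIZED_PROFILE_QUERY =
                "SELECT name, city, school FROM user_profiles WHERE user_id = ?";

        // The cost: every redundant copy of the name must be updated together,
        // otherwise the copies become inconsistent.
        static final String[] DENORMALIZED_NAME_UPDATE = {
                "UPDATE users SET name = ? WHERE id = ?",
                "UPDATE user_profiles SET name = ? WHERE user_id = ?",
                "UPDATE comments SET author_name = ? WHERE author_id = ?"
        };
    }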
Navneet Kumar

Synchronization and the Java Memory Model - 3 views

    • Navneet Kumar
       
      Assignments to long and double variables are not atomic (see the sketch after this list).
  • as-if-serial property of these manipulations shields sequential programmers from needing to know if or how they take place. Programmers who never create their own threads are almost never impacted by these issues
  • All changes made in one synchronized method or block are atomic and visible with respect to other synchronized methods and blocks employing the same lock, and processing of synchronized methods or blocks within any given thread is in program-specified order. Even though processing of statements within blocks may be out of order, this cannot matter to other threads employing synchronization.
  • atomicity alone does not guarantee that you will get the value most recently written by any thread. For this reason, atomicity guarantees per se normally have little impact on concurrent program design
  • Thread.start has the same memory effects as a lock release by the thread calling start, followed by a lock acquire by the started thread
  • releasing a lock forces a flush of all writes from working memory employed by the thread, and acquiring a lock forces a (re)load of the values of accessible fields. While lock actions provide exclusion only for the operations performed within a synchronized method or block, these memory effects are defined to cover all fields used by the thread performing the action.
  • If a field is declared as volatile, any value written to it is flushed and made visible by the writer thread before the writer thread performs any further memory operation (i.e., for the purposes at hand it is flushed immediately). Reader threads must reload the values of volatile fields upon each access.
  • you will obtain either its initial value or some value that was written by some thread, but not some jumble of bits resulting from two or more threads both trying to write values at the same time
  • As a thread terminates, all written variables are flushed to main memory.
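A minimal sketch (illustrative, not from the article) of the two guarantees discussed above: a volatile flag makes the writer thread's store visible to the reader, and synchronizing on a common lock makes the read-modify-write of the long counter both atomic and visible, so the reader can never observe a torn or stale value.

    public class VisibilitySketch {

        // volatile: a write by one thread is visible to subsequent reads by others.
        private volatile boolean done = false;

        // Guarded by this object's lock: increments are atomic and visible to
        // other synchronized blocks using the same lock (and a long can never
        // be observed as a half-written value here).
        private long counter = 0;

        private synchronized void increment() {
            counter++;
        }

        private synchronized long current() {
            return counter;
        }

        public static void main(String[] args) throws InterruptedException {
            VisibilitySketch s = new VisibilitySketch();

            Thread worker = new Thread(() -> {
                for (int i = 0; i < 1_000_000; i++) {
                    s.increment();
                }
                s.done = true; // volatile write: flushed before the thread proceeds
            });

            worker.start();
            while (!s.done) {
                // spin until the volatile write becomes visible
            }
            worker.join();
            System.out.println(s.current()); // always 1000000
        }
    }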