
ProgrammingPages: Group items tagged "normalization"


Navneet Kumar

When Not to Normalize your SQL Database - SWiK

  • With the above design, it takes six SQL Join operations to access and display the information about a single user. This makes rendering the profile page a fairly database intensive operation which is compounded by the fact that profile pages are the most popular pages on social networking sites.
  • Database denormalization is the kind of performance optimization that should be carried out as a last resort, after trying things like creating database indexes, using SQL views, and implementing application-specific in-memory caching. However, if you hit massive scale and are dealing with millions of queries a day across hundreds of millions to billions of records, or have decided to go with database partitioning/sharding, then you will likely end up resorting to denormalization.
    • Navneet Kumar
       
      De-Normalization is OK if you aren't going to update.
  • Denormalization means that you are now likely to deal with data inconsistencies, because you are storing redundant copies of data and may not be able to update all copies of a column value simultaneously when it is changed, for a variety of reasons. Having tools in your infrastructure to support fixing up data of this sort then becomes very important.
  •  
    De-normalizing the database to improve speed (a JDBC sketch of the update-consistency cost follows below).
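The consistency cost mentioned in the highlights above can be made concrete. The following is a minimal JDBC sketch, not code from the article; the schema (users.display_name, comments.author_name) is hypothetical. It shows what denormalization costs on the write path: once a user's display name is copied into the comments table, renaming the user means updating every redundant copy, ideally inside one transaction.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical schema: users(id, display_name) plus a denormalized
// comments(author_id, author_name, ...) table that carries a redundant
// copy of the display name so comment pages render without a join.
public class RenameUser {

    public static void renameUser(Connection conn, long userId, String newName)
            throws SQLException {
        boolean previousAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try (PreparedStatement updateUser = conn.prepareStatement(
                     "UPDATE users SET display_name = ? WHERE id = ?");
             PreparedStatement updateComments = conn.prepareStatement(
                     "UPDATE comments SET author_name = ? WHERE author_id = ?")) {

            updateUser.setString(1, newName);
            updateUser.setLong(2, userId);
            updateUser.executeUpdate();

            // The redundant copy must be updated too, or readers of the
            // denormalized comments table will keep seeing the old name.
            updateComments.setString(1, newName);
            updateComments.setLong(2, userId);
            updateComments.executeUpdate();

            conn.commit();
        } catch (SQLException e) {
            conn.rollback(); // keep both copies consistent on failure
            throw e;
        } finally {
            conn.setAutoCommit(previousAutoCommit);
        }
    }
}
```

In a sharded setup the two tables may not even live in the same database, so a single transaction is no longer available and the "fix-up tooling" mentioned in the highlight becomes the fallback.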
Navneet Kumar

MySQL AB :: An Introduction to Database Normalization

  •  
    Intro to database normalization.
Navneet Kumar

PatHelland's WebLog : Normalization Is for Sissies

  •  
    De-normalization is OK if you are not going to update.
Navneet Kumar

Joe Gregorio | BitWorking | ETech '07 Summary - Part 2 - MegaData

  • the limits you need to put on yourself when storing a billion rows in a database, and they included: no joins, no transactions, no stored procedures, and no triggers.
  • Joshua has similar suggestions from his experience building del.icio.us: no joins, no transactions, no autoincrement
  • BigTable, Google's column-based store with no transactions
    • Navneet Kumar
       
      "Column Based"
  • What's the point in designing tables for a webapp when an RDF-backed store will manage the data for you and RDF queries will come back as tabular data anyway?
  • designing and maintaining yet another relational schema for yet another webapp - doing so is starting to make as much sense as designing my own filesystem or TP monitor.
  • RDF + SPARQL + distributed data sources from around the web?
  • reason that rails and django are so productive; they're highly optimised for domain models. Raw RDF doesn't really do domains like that; you have to expend effort distilling triples into 'things';
  •  
    Database design for huge data: distributed, joinless, transactionless, de-normalized (an application-side join sketch follows below).
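One way to read the "no joins" rule quoted above is that relationship traversal moves out of the database and into application code. The sketch below is a hypothetical Java example (the record types and in-memory maps are stand-ins for two independent key-value stores or shards, in the spirit of BigTable or del.icio.us); it "joins" users to their bookmarks with per-key lookups instead of a SQL join.

```java
import java.util.List;
import java.util.Map;

// Stand-ins for rows held in two separate, join-less stores.
public class ApplicationSideJoin {

    record User(long id, String name) {}
    record Bookmark(long userId, String url) {}

    private final Map<Long, User> usersById;                 // store/shard #1
    private final Map<Long, List<Bookmark>> bookmarksByUser; // store/shard #2

    public ApplicationSideJoin(Map<Long, User> usersById,
                               Map<Long, List<Bookmark>> bookmarksByUser) {
        this.usersById = usersById;
        this.bookmarksByUser = bookmarksByUser;
    }

    // The "join" happens here: one lookup per store, stitched together
    // in memory by the application rather than by the database.
    public List<String> bookmarksFor(long userId) {
        User user = usersById.get(userId);
        if (user == null) {
            return List.of();
        }
        return bookmarksByUser.getOrDefault(userId, List.of()).stream()
                .map(b -> user.name() + " -> " + b.url())
                .toList();
    }
}
```

Giving up joins and transactions buys horizontal scalability, which is exactly the trade the highlights above describe.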
Navneet Kumar

Synchronization and the Java Memory Model

    • Navneet Kumar
       
      Assignments to long and double variables are not atomic (see the Java sketch after the annotations below).
  • as-if-serial property of these manipulations shields sequential programmers from needing to know if or how they take place. Programmers who never create their own threads are almost never impacted by these issues
  • All changes made in one synchronized method or block are atomic and visible with respect to other synchronized methods and blocks employing the same lock, and processing of synchronized methods or blocks within any given thread is in program-specified order. Even though processing of statements within blocks may be out of order, this cannot matter to other threads employing synchronization.
  • atomicity alone does not guarantee that you will get the value most recently written by any thread. For this reason, atomicity guarantees per se normally have little impact on concurrent program design
  • Thread.start has the same memory effects as a lock release by the thread calling start, followed by a lock acquire by the started thread
  • releasing a lock forces a flush of all writes from working memory employed by the thread, and acquiring a lock forces a (re)load of the values of accessible fields. While lock actions provide exclusion only for the operations performed within a synchronized method or block, these memory effects are defined to cover all fields used by the thread performing the action.
  • If a field is declared as volatile, any value written to it is flushed and made visible by the writer thread before the writer thread performs any further memory operation (i.e., for the purposes at hand it is flushed immediately). Reader threads must reload the values of volatile fields upon each access.
  • you will obtain either its initial value or some value that was written by some thread, but not some jumble of bits resulting from two or more threads both trying to write values at the same time
  • As a thread terminates, all written variables are flushed to main memory.
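As a concrete companion to the annotations above, here is a minimal Java sketch (class, field, and method names are my own, not from the article) of the three rules they highlight: plain 64-bit writes are not atomic, a volatile write publishes the ordinary writes that preceded it, and synchronized methods on a shared lock provide both atomicity and visibility.

```java
public class MemoryModelSketch {

    // 1. Plain 64-bit field: without volatile or synchronization, a reader in
    //    another thread may observe a "torn" value mixing two writes.
    //    Declaring it volatile would make its reads and writes atomic.
    private long unsynchronizedTotal;

    // 2. Safe publication via volatile: every write the writer performed
    //    before setting `ready` is visible to a reader that sees ready == true.
    private volatile boolean ready;
    private int payload; // deliberately non-volatile

    public void writer() {
        payload = 42;    // ordinary write...
        ready = true;    // ...published by the volatile write that follows it
    }

    public void reader() {
        if (ready) {
            // The volatile read forces a (re)load, so payload is guaranteed
            // to be 42 here.
            System.out.println(payload);
        }
    }

    // 3. Synchronized methods on the same object share one lock: updates are
    //    atomic and visible to other threads using that lock, because the
    //    release flushes this thread's writes and the acquire reloads fields.
    private long counter;

    public synchronized void recordHit() {
        counter++;
    }

    public synchronized long hits() {
        return counter;
    }
}
```

The `unsynchronizedTotal` field is only there to illustrate the long/double caveat; in real code it would be volatile, guarded by a lock, or replaced with java.util.concurrent.atomic.AtomicLong.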