
Unusual software bug - Wikipedia, the free encyclopedia

  • heisenbug, bohrbug, mandelbug, schroedinbug
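  These names come up often enough to merit a tiny illustration. A heisenbug changes behaviour when you try to observe it; a classic source is a data race, where adding a print statement or attaching a debugger perturbs thread timing. A minimal Python sketch of such a race (hypothetical, not taken from the article):

      import threading

      counter = 0

      def worker():
          global counter
          for _ in range(100_000):
              value = counter      # read the shared value
              # A print() or breakpoint here changes the interleaving,
              # so the bug may stop reproducing the moment you look at it.
              counter = value + 1  # write it back: a lost-update race

      threads = [threading.Thread(target=worker) for _ in range(2)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      print(counter)  # frequently less than 200000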

An Unorthodox Approach to Database Design : The Coming of the Shard | High Scalability

  • A shard architecture partitions data across multiple servers, so each server holds one shard of the data.
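  A minimal Python sketch of the routing this implies; the host names and the hash-modulo scheme are illustrative assumptions, not from the article:

      import hashlib

      # Each server holds one horizontal slice ("shard") of the data set.
      SHARDS = ["db1.example.com", "db2.example.com", "db3.example.com"]

      def shard_for(key: str) -> str:
          """Pick the shard that owns this key by hashing it."""
          digest = hashlib.md5(key.encode("utf-8")).hexdigest()
          return SHARDS[int(digest, 16) % len(SHARDS)]

      print(shard_for("user:42"))  # every read/write for this key hits one server

  Note that plain modulo hashing reshuffles most keys when a shard is added; consistent hashing is the usual refinement.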

Folksonomy - Wikipedia, the free encyclopedia

  • A folksonomy is the practice and method of collaborative categorization using freely chosen keywords called tags.
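  Since a folksonomy is just many users attaching free-form tags to shared items, a toy model is a pair of inverted maps. A hedged Python sketch (the URLs are placeholders):

      from collections import defaultdict

      item_tags = defaultdict(set)  # item -> tags users have applied
      tag_items = defaultdict(set)  # tag  -> items carrying that tag

      def tag(item, user_tag):
          # No controlled vocabulary: any string a user chooses is a valid tag.
          item_tags[item].add(user_tag)
          tag_items[user_tag].add(item)

      tag("http://example.com/shard-article", "database")
      tag("http://example.com/shard-article", "scaling")
      tag("http://example.com/megadata-talk", "database")

      print(tag_items["database"])  # items the crowd grouped under one tag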

Joe Gregorio | BitWorking | ETech '07 Summary - Part 2 - MegaData

  • The limits you need to put on yourself when storing a billion rows in a database include: no joins, no transactions, no stored procedures, and no triggers (see the sketch after this list).
  • Joshua has similar suggestions from his experience building del.icio.us: no joins, no transactions, no autoincrement
  • BigTable, Google's column-based store with no transactions
    • Navneet Kumar: "Column Based"
  • What's the point in designing tables for a webapp when an RDF-backed store will manage the data for you and RDF queries will come back as tabular data anyway? (A small RDF example follows this list.)
  • Designing and maintaining yet another relational schema for yet another webapp is starting to make as much sense as designing my own filesystem or TP monitor.
  • RDF + SPARQL + distributed data sources from around the web?
  • One reason that Rails and Django are so productive is that they're highly optimised for domain models. Raw RDF doesn't really do domains like that; you have to expend effort distilling triples into 'things'.
  • Database design for huge data: distributed, joinless, transactionless, de-normalized.
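  Taken together, those limits (no joins, no transactions, denormalized rows) reduce the store to key-based reads over pre-grouped columns. A hedged Python sketch of that access pattern; the schema is invented for illustration and is not BigTable's actual API:

      # A row is found by its key, and its columns already hold everything
      # the caller needs: any cross-entity "join" happened at write time.
      store = {
          "user:42": {
              "name": "Ada",
              "post_titles": ["Shard early", "Joins considered harmful"],
          },
      }

      def get(row_key, column):
          # Single-key lookup: no join, no transaction, no trigger involved.
          return store.get(row_key, {}).get(column)

      print(get("user:42", "post_titles"))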
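  As for the RDF side of the argument: triples queried with SPARQL do come back as tabular rows, much like an SQL result set. A small sketch using the rdflib library as one concrete stand-in (the data and namespace are invented):

      from rdflib import Graph, Literal, Namespace

      EX = Namespace("http://example.org/")
      g = Graph()
      g.add((EX.post1, EX.title, Literal("The Coming of the Shard")))
      g.add((EX.post2, EX.title, Literal("MegaData")))

      # A SPARQL SELECT returns rows, i.e. tabular data, no table design needed.
      rows = g.query(
          "SELECT ?post ?title WHERE { ?post <http://example.org/title> ?title }"
      )
      for post, title in rows:
          print(post, title)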

PatHelland's WebLog : Normalization Is for Sissies

  • De-normalization is OK if you are not going to update.

When Not to Normalize your SQL Database - SWiK

  • With the above design, it takes six SQL join operations to access and display the information about a single user. This makes rendering the profile page a fairly database-intensive operation, which is compounded by the fact that profile pages are the most popular pages on social networking sites.
  • Database denormalization is the kind of performance optimization that should be carried out as a last resort, after trying things like creating database indexes, using SQL views, and implementing application-specific in-memory caching. However, if you hit massive scale, dealing with millions of queries a day across hundreds of millions to billions of records, or have decided to go with database partitioning/sharding, then you will likely end up resorting to denormalization.
    • Navneet Kumar: "De-normalization is OK if you aren't going to update"
  • Denormalization means that you are now likely to deal with data inconsistencies, because you are storing redundant copies of data and may not be able to update all copies of a column value simultaneously when it is changed, for a variety of reasons. Having tools in your infrastructure to support fixing up data of this sort then becomes very important.
  • De-normalizing the database to improve speed (see the sketch below).
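  A hedged Python sketch of the trade described above; the profile schema is invented. The write path does extra work maintaining redundant copies so the hot read path becomes a single lookup instead of six joins:

      # Denormalized profile store: everything the profile page renders,
      # precomputed under one key.
      profiles = {
          "u42": {"name": "Ada", "friend_count": 3, "recent_comments": []},
      }

      def on_comment_added(user_id, text):
          # Write-time duplication: update the redundant copy immediately.
          # If this step ever fails, copies drift, hence the fix-up tooling
          # the annotation above calls for.
          profiles[user_id]["recent_comments"].append(text)

      def render_profile(user_id):
          # Read path: one key lookup, no joins.
          return profiles[user_id]

      on_comment_added("u42", "nice post")
      print(render_profile("u42"))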

What about Sun embracing JavaScript?

  • There's a lot of work being done by the Mozilla Foundation and Adobe to integrate a killer JavaScript interpreter into both Flash and Firefox using Adobe's Tamarin engine.

SQL Server 7.0: System Error Messages

  • All MS-SQL system error messages.

SQL Server 7.0: Resolving System Error Messages

  • Some MS-SQL error-code descriptions.

MySQL AB :: MySQL 3.23, 4.0, 4.1 Reference Manual :: A.2 Server Error Codes and Messages

  • Appendix of MySQL error codes and their messages.