The Windows Azure TableBrowser is a web-based application that gives you the ability to browse your Windows Azure Storage tables and create, edit, delete, and copy entities.
Standard fare for most dynamic data and the way most everybody would tell you to do it.
Only thing is that it scales like a dog.
The thing is that holding all the weather of the entire globe in memory takes a lot of memory, more than is reasonable. In that case there’s a fairly decent chance that a given request can’t be served from the cache, resulting in a query to the database and an update to the cache that bumps something else out. In short, not a very good hit rate.
If we were able to make our clients in London perform an HTTP GET on http://weather.myclient.com/UK/London then we could return headers in the HTTP response telling the intermediaries that they can cache the response for an hour, or however long we want.
Instead of getting hammered by millions of requests a day, the internet would easily shoulder 90% of that load, making it much easier to scale. Thanks, Al.
Expression Web SuperPreview for Internet Explorer shows your web pages rendered in Internet Explorer 6 and either Internet Explorer 7 or Internet Explorer 8, depending on which version you have installed on your machine. You can view the pages side by side or as an onion-skin overlay and use rulers, guides and zoom/pan tools to precisely identify differences in layout. You can even compare your page comp to how the targeted browsers render the page.
Expression Web SuperPreview for Internet Explorer is a standalone, free application with no expiration and no technical support from Microsoft.
LINQPad lets you interactively query SQL databases in a modern query language: LINQ. Kiss goodbye to SQL Management Studio!
It's a highly ergonomic code snippet IDE that instantly executes any C#/VB expression, statement block or program, the ultimate in dynamic development.
Best of all, LINQPad standard edition is free and can run without installation (or with a low-impact setup).
Almost every article I see that describes the difference between value types and reference types explains in (frequently incorrect) detail about what “the stack” is and how the major difference between value types and reference types is that value types go on the stack.
I find this characterization of a value type based on its implementation details rather than its observable characteristics to be both confusing and unfortunate. Surely the most relevant fact about value types is not the implementation detail of how they are allocated, but rather the by-design semantic meaning of “value type”, namely that they are always copied “by value”.
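The same semantic line is observable in JavaScript, where primitives are copied by value and objects by reference. This sketch is only an analogy to the by-design meaning of "copied by value", not a claim about how the CLR lays out memory:

```javascript
// Copy-by-value: assigning a primitive copies the value itself.
let a = 5;
let b = a;      // b gets its own copy
b = b + 1;      // a is unaffected; a is still 5

// Copy-by-reference: assigning an object copies only the reference.
const p = { x: 5 };
const q = p;    // p and q name the same object
q.x = 6;        // visible through p as well; p.x is now 6
```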
Of course, the simplistic statement I described is not even true. As the MSDN documentation correctly notes, value types are allocated on the stack sometimes. For example, the memory for an integer field in a class type is part of the class instance’s memory, which is allocated on the heap.
As long as the implementation maintains the semantics guaranteed by the specification, it can choose any strategy it likes for generating efficient code.
That Windows typically does so, and that this one-meg array is an efficient place to store small amounts of short-lived data, is great, but it’s not a requirement that an operating system provide such a structure, or that the jitter use it. The jitter could choose to put every local “on the heap” and live with the performance cost of doing so, as long as the value type semantics were maintained.
I would only be making that choice if profiling data showed that there was a large, real-world-customer-impacting performance problem directly mitigated by using value types. Absent such data, I’d always make the choice of value type vs reference type based on whether the type is semantically representing a value or semantically a reference to something.
function ZParenizor2(value) {
    var that = new Parenizor(value);
    that.toString = function () {
        if (this.getValue()) {
            return this.uber('toString');
        }
        return "-0-";
    };
    return that;
}
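To see the parasitic constructor in action, here is a minimal, assumed Parenizor (the real article builds it with the sugar methods; this standalone version invents the class and hard-codes a one-level uber) alongside the function above:

```javascript
// Assumed base class, invented here so the example runs standalone.
function Parenizor(value) {
    this.value = value;
}
Parenizor.prototype.getValue = function () {
    return this.value;
};
Parenizor.prototype.toString = function () {
    return '(' + this.getValue() + ')';
};
// A one-level stand-in for uber: call the parent prototype's method.
Parenizor.prototype.uber = function (name) {
    return Parenizor.prototype[name].apply(
        this, Array.prototype.slice.call(arguments, 1));
};

function ZParenizor2(value) {
    var that = new Parenizor(value);
    that.toString = function () {
        if (this.getValue()) {
            return this.uber('toString');
        }
        return "-0-";
    };
    return that;
}

ZParenizor2(42).toString();  // "(42)" via the inherited toString
ZParenizor2(0).toString();   // "-0-" because 0 is falsy
```

Note that because ZParenizor2 builds and returns its own object, it works the same with or without the new prefix.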
Again, we augment Function. We make an instance of the
parent class and use it as the new prototype. We also
correct the constructor field, and we add the uber method to
the prototype as well.
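A simplified sketch of what that inherits sugar could look like (the real version walks arbitrarily deep chains; this one assumes a single level of inheritance, and repeats the method sugar so the block stands alone):

```javascript
// The method sugar, repeated here so the snippet is self-contained.
Function.prototype.method = function (name, func) {
    this.prototype[name] = func;
    return this;
};

// Simplified inherits: one level of inheritance only.
Function.method('inherits', function (parent) {
    this.prototype = new parent();           // parent instance as prototype
    this.prototype.constructor = this;       // correct the constructor field
    this.method('uber', function (name) {    // reach the parent's version
        return parent.prototype[name].apply(
            this, Array.prototype.slice.call(arguments, 1));
    });
    return this;
});
```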
This adds a public method to the Function.prototype, so all
functions get it by Class Augmentation. It takes a name and a function, and
adds them to a function's prototype object.
To make the examples above work, I wrote four sugar
methods. First, the method method, which adds an instance method to
a class.
Function.prototype.method = function (name, func) {
    this.prototype[name] = func;
    return this;
};
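For example (Point and norm are invented here, and the method definition is repeated so the snippet runs on its own):

```javascript
Function.prototype.method = function (name, func) {
    this.prototype[name] = func;
    return this;
};

function Point(x, y) {
    this.x = x;
    this.y = y;
}

// Chainable: method returns the constructor, so calls can be strung together.
Point.method('norm', function () {
    return Math.sqrt(this.x * this.x + this.y * this.y);
});

new Point(3, 4).norm();  // 5
```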
JavaScript can be used like a classical language, but it also has a level of
expressiveness which is quite unique. We have looked at Classical Inheritance,
Swiss Inheritance, Parasitic Inheritance, Class Augmentation, and Object Augmentation.
This large set of code reuse patterns comes from a language which is considered
smaller and simpler than Java.
I have been writing JavaScript
for 8 years now, and I have never once found need to use an uber
function. The super idea is fairly important in the classical
pattern, but it appears to be unnecessary in the prototypal and functional
patterns. I now see my early attempts to support the classical model in
JavaScript as a mistake.
The biggest question left unanswered in my mind is the role state will play in software of the future.
That seems like an absurd statement, or a naïve one at the very least. State is everywhere:
- The values held in memory.
- Data locally on disk.
- Data in-flight that is being sent over a network.
- Data stored in the cloud, including in a database, remote filesystem, etc.
Certainly all of these kinds of state will continue to exist far into the future. Data is king, and is one major factor that will drive the shift to parallel computing. The question then is how will concurrent programs interact with this state, read and mutate it, and what isolation and synchronization mechanisms are necessary to do so?
Many programs have ample gratuitous dependencies, simply because of the habits we’ve grown accustomed to over 30-odd years of imperative programming. Our education, mental models, books, best-of-breed algorithms, libraries, and languages all push us in this direction. We like to scribble intermediary state into shared variables because it’s simple to do so and because it maps to our von Neumann model of how the computer works.
We need to get rid of these gratuitous dependencies. Merely papering over them with a transaction—making them “safe”—doesn’t do anything to improve the natural parallelism that a program contains. It just ensures it doesn’t crash. Sure, that’s plenty important, but providing programming models and patterns to eliminate the gratuitous dependencies also achieves the goal of not crashing but with the added benefit of actually improving scalability too. Transactions have worked so well in enabling automatic parallelism in databases because the basic model itself (without transactions) already implies natural isolation among queries. Transactions break down and scalability suffers for programs that aren’t architected in this way. We should learn from the experience of the database community in this regard
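As a toy sketch of the distinction (all names invented): the first version below scribbles intermediary state into one shared variable, so every step depends on the last, while the second expresses the same computation with each element's work isolated, leaving only one explicit combining dependency:

```javascript
// Gratuitous dependency: every iteration reads and writes `sum`,
// so the loop body can only run one step at a time.
function totalImperative(values) {
    let sum = 0;                        // shared intermediary state
    for (let i = 0; i < values.length; i++) {
        sum += values[i] * 2;
    }
    return sum;
}

// Same result, no shared scribbling: each map step is independent of the
// others, and only the final reduce expresses a real dependency.
function totalIsolated(values) {
    return values
        .map(function (v) { return v * 2; })
        .reduce(function (a, b) { return a + b; }, 0);
}
```

Wrapping the first version's loop body in a transaction would make concurrent callers safe, but only the second shape exposes parallelism a runtime could actually exploit.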
There will always be hidden mutation of shared state inside lower-level system components. These are often called “benevolent side-effects,” thanks to Hoare, and apply to things like lazy initialization and memoization caches. These will be done by concurrency ninjas who understand locks. And their effects will be isolated by convention.
Even with all of this support, we’d be left with an ecosystem of libraries like the .NET Framework itself which have been built atop a fundamentally mutable and imperative system. The path forward here is less clear to me, although having the ability to retain a mutable model within pockets of guaranteed isolation certainly makes me think the libraries are salvageable. Thankfully, the shift will likely be very gradual, and the pieces that pose substantial problems can be rewritten in place incrementally over time. But we need the fundamental language and type system support first.