Task Queue Python API Overview - Google App Engine - Google Code - 0 views
-
With the Task Queue API, applications can perform work outside of a user request but initiated by a user request. If an app needs to execute some background work, it may use the Task Queue API to organize that work into small, discrete units, called Tasks. The app then inserts these Tasks into one or more Queues. App Engine automatically detects new Tasks and executes them when system resources permit.
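As a rough sketch of what this looks like in the Python API (handler paths and parameter names here are made up for illustration; in older SDKs the module lived under google.appengine.api.labs.taskqueue):

from google.appengine.api import taskqueue   # labs.taskqueue in older SDKs
from google.appengine.ext import webapp

class EnqueueHandler(webapp.RequestHandler):
    """User-facing handler: kicks off background work and returns immediately."""
    def post(self):
        # Insert a small, discrete unit of work onto the default queue;
        # App Engine will POST the params to /worker when resources permit.
        taskqueue.add(url='/worker', params={'key': self.request.get('key')})
        self.response.out.write('task enqueued')

class WorkerHandler(webapp.RequestHandler):
    """Task handler: invoked by the queue, not directly by the user."""
    def post(self):
        key = self.request.get('key')
        # ... do the actual background work for key here ...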
Google App Engine Java Experiments - 0 views
Google App Engine Task Queue on GWT - Stack Overflow - 0 views
-
Yes, the worker would be a servlet that handles a request with POST parameters. If you want a call that is asynchronous from the client's point of view, then RPC is enough (from the server's point of view it is still synchronous). If you want to run "delayed" jobs that don't talk back to your client, you can use a task queue.
-
Instead of trying to create a thread (which is impossible in App Engine), this is a great way to asynchronously run tasks.
Unowned relationship confusion - Google App Engine for Java | Google Groups - 0 views
-
You will only find the "unowned relationships" terminology in the Google docs. An "unowned relationship" in the GAE/J docs is where you have a field that is a Key, or a Collection/Map/array of Keys. So there is no *real* relation, just an implicit relation via that key. You have to manage these keys yourself, and by using them you tie your code to GAE/J.
Selecting based on __key__ (a unique identifier) in google appengine - Stack Overflow - 0 views
-
Don't bother with GQL for key-based retrieval -- make a key object from the string: k = db.Key('aght52oobW1hZHIOCxIHTWVzc2FnZRiyAQw') and just db.get(k). If you insist on GQL, btw, that k -- a suitably constructed instance of db.Key, NOT a string object!-) -- is also what you need to substitute into the GQL query (as :1 or whatever).
-
keys are SERIOUSLY unique -- from a key you can get the entity's .app() (so uniqueness across apps is a given), .kind() (so uniqueness WITHIN the app is no problem either), etc.
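Putting those two annotations together, a short sketch (assuming the encoded key above belongs to a kind called Message):

from google.appengine.ext import db

class Message(db.Model):     # stand-in model for the kind encoded in the key
    pass

# Rebuild the key from its encoded string form and fetch the entity directly.
k = db.Key('aght52oobW1hZHIOCxIHTWVzc2FnZRiyAQw')
entity = db.get(k)

# A key is unique across apps and kinds -- it knows where it belongs:
print k.kind()   # 'Message'
print k.app()    # the application id

# If you really want GQL, bind the Key object (not the string) as :1.
entity = db.GqlQuery("SELECT * FROM Message WHERE __key__ = :1", k).get()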
Background work with the deferred library - Google App Engine - Google Code - 0 views
-
Thanks to the Task Queue API released in SDK 1.2.3, it's easier than ever to do work 'offline', separate from user serving requests. In some cases, however, setting up a handler for each distinct task you want to run can be cumbersome, as can serializing and deserializing complex arguments for the task - particularly if you have many diverse but small tasks that you want to run on the queue. Fortunately, a new library in release 1.2.5 of the SDK makes these ad-hoc tasks much easier to write and execute. This library is found in google.appengine.ext.deferred, and from here on in we'll refer to it as the 'deferred' library. The deferred library lets you bypass all the work of setting up dedicated task handlers and serializing and deserializing your parameters by exposing a simple function, deferred.defer().
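Basic usage, roughly as the article presents it (the function and its arguments are just placeholders):

import logging

from google.appengine.ext import deferred

def do_something_expensive(a, b, c=None):
    # Any picklable function and arguments will do.
    logging.info("Doing something expensive with %r, %r, %r", a, b, c)
    # ... the actual work goes here ...

# Somewhere in a request handler: schedule the call to run on the task queue
# instead of executing it inline.
deferred.defer(do_something_expensive, "Hello, world!", 42, c=True)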
-
To demonstrate how powerful the deferred library can be, we're going to reprise an example from the remote_api article - the Mapper class. Like the example in the remote_api article, this class will make it easy to iterate over a large set of entities, making changes or calculating totals. Unlike the remote_api version, though, our version won't require an external computer to run it on, and it'll be more efficient to boot!
-
Task Queue items are limited to 10kb of associated data. This means that when the deferred library serializes the details of your call, it must amount to less than 10 kilobytes in order to fit on the Task Queue directly. No need to panic, though: If you try to enqueue a task that is too big to fit on the queue by itself, the deferred library will automatically create a new Entity in the datastore to hold information about the task, and will delete the entity once the task has been run. This means that in practice, your function call can be up to 1MB once serialized.
Accessing the datastore remotely with remote_api - Google App Engine - Google Code - 0 views
-
The remote_api module consists of two parts: A 'handler', which you install on the server to handle remote datastore requests, and a 'stub', which you set up on the client to translate datastore requests into calls to the remote handler. remote_api works at the lowest level of the datastore, so once you've set up the stub, you don't have to worry about the fact that you're operating on a remote datastore: With a few caveats, it works exactly the same as if you were accessing the datastore directly.
-
Note that the handler specifies "login: admin". This is extremely important, since we don't want to give just anyone unfettered access to our datastore!
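On the client side, the stub setup described above looks roughly like this (a sketch run with the SDK on your Python path; 'your-app-id' is a placeholder, and later SDKs renamed the entry point to ConfigureRemoteApi):

import getpass

from google.appengine.ext import db
from google.appengine.ext.remote_api import remote_api_stub

def auth_func():
    # Credentials for the admin-only handler ("login: admin" above).
    return raw_input('Email: '), getpass.getpass('Password: ')

# Point the local datastore stub at the /remote_api handler on the server
# (by default it talks to your-app-id.appspot.com).
remote_api_stub.ConfigureRemoteDatastore('your-app-id', '/remote_api', auth_func)

# From here on, db.get(), db.put() and queries operate on the remote datastore.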
-
Since you're accessing the datastore over HTTP, there's a bit more overhead and latency than when you access it locally. In order to speed things up and decrease load, try to limit the number of round-trips you do by batching gets and puts, and fetching batches of entities from queries. This is good advice not just for remote_api, but for using the datastore in general, since a batch operation is only considered to be a single Datastore operation.
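For example (a sketch with a made-up Record kind), the difference between N round-trips and one:

from google.appengine.ext import db

class Record(db.Model):          # hypothetical kind, for illustration only
    value = db.IntegerProperty()

# One query round-trip for a whole batch of entities, not one call per entity.
records = Record.all().fetch(100)

for r in records:
    r.value = (r.value or 0) + 1

# One batch put -- counted as a single datastore operation.
db.put(records)

# Batch gets and deletes work the same way: pass a list of keys.
same_records = db.get([r.key() for r in records])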
Delete all from datastore - Google App Engine for Java | Google Groups - 0 views
-
All you can do is fetch the entities of the kind you want from the datastore and delete them (call datastore.delete(Iterable<Entity>)). This way you can delete about 1000 entities per request... There is no way to delete all entities of a Kind with a single API call.
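The thread is about the Java API, but the same pattern in Python would look something like this (a rough sketch; Message is a hypothetical kind):

from google.appengine.ext import db

class Message(db.Model):     # hypothetical kind standing in for whatever you want to purge
    pass

def delete_all_messages(batch_size=500):
    # Fetch a batch of keys only (cheaper than full entities), delete the
    # batch, and repeat until the query comes back empty.
    while True:
        keys = Message.all(keys_only=True).fetch(batch_size)
        if not keys:
            break
        db.delete(keys)

In practice you would run one batch per request (or per task), since a single request won't run long enough to clear a large kind in one go.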
-
> If there is a way to list current Kinds or select all entities of any kind, I could manage this.
How to delete all entities of a kind with the datastore viewer - Google App Engine for ... - 0 views
-
One thing you get used to on appengine is that any bulk data work requires the task queue. You can use a little bit of framework and make all of these transforms (including deleting data) a question of just writing a simple task class and firing it off. You'll want a copy of the Deferred servlet: http://code.google.com/p/gaevfs/source/browse/trunk/src/com/newatlant... Fair warning: I found that I needed to change the code to make it perform base64 encoding all the time, not just on the dev instance.