Class GqlQuery represents a query for retrieving entities from the App Engine Datastore using the SQL-like App Engine query language, GQL. For a complete discussion of GQL syntax and features, see the GQL Reference; see also the related class Query, which uses objects and methods, rather than GQL, to prepare queries. GqlQuery is defined in the module google.appengine.ext.db.
Note: The index-based query mechanism supports a wide range of queries and is suitable for most applications. However, it does not support some kinds of query common in other database technologies: in particular, joins and aggregate queries aren't supported within the Datastore query engine. See the Datastore Queries page for limitations on Datastore queries.
Introduction
An application creates a GQL query object by calling either the GqlQuery constructor directly or the class method gql() of an entity kind's model class. The GqlQuery constructor takes as an argument a query string, a complete GQL statement beginning with SELECT ... FROM model-name. Values in WHERE clauses can be numeric or string literals, or can use parameter binding for values. Parameters can be bound using either positional or keyword arguments:
q = GqlQuery("SELECT * FROM Song WHERE composer = 'Lennon, John'")
q = GqlQuery("SELECT __key__ FROM Song WHERE composer = :1", "Lennon, John")
q = GqlQuery("SELECT * FROM Song WHERE composer = :composer", composer="Lennon, John")
For convenience, the Model and Expando classes have a class method gql() that returns a GqlQuery instance. This method takes a GQL query string without the SELECT ... FROM model-name prefix, which is implied:
q = Song.gql("WHERE composer = 'Lennon, John'")
The application can then execute the query and access the results in any of the following ways:
-
Treat the query object as an iterable, to process matching entities one at a time:
for song in q: print song.title
This implicitly calls the query's run() method to generate the matching entities. It is thus equivalent to
for song in q.run(): print song.title
You can set a limit on the number of results to process with the keyword argument limit:
for song in q.run(limit=5): print song.title
The iterator interface does not cache results, so creating a new iterator from the query object reiterates the same query from the beginning.
-
Call the query's get() method, to obtain the first matching entity found in the Datastore:
result = q.get()
print result.title
-
Call the query's fetch() method, to obtain a list of all matching entities up to a specified number of results:
results = q.fetch(limit=5)
for song in results: print song.title
As with run(), the query object does not cache results, so calling fetch() a second time reissues the same query.
Note: You should rarely need to use this method; it is almost always better to use run() instead.
Constructor
The constructor for class GqlQuery is defined as follows:
- class GqlQuery(query_string, *args, **kwds)
-
Creates an instance of class GqlQuery for retrieving entities from the App Engine Datastore using the GQL query language.
Arguments
- query_string
- String containing a complete GQL statement.
- args
- Positional parameter values.
- kwds
- Keyword parameter values.
Instance Methods
Instances of class GqlQuery have the following methods:
- bind(*args, **kwds)
-
Rebinds the query's parameter values. The modified query will be executed the first time results are accessed after its parameters have been rebound.
Rebinding parameters to an existing GqlQuery object is faster than building a new GqlQuery object, because the query string doesn't need to be parsed again.
Arguments
- args
- New positional parameter values.
- kwds
- New keyword parameter values.
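For illustration, the following sketch (reusing the hypothetical Song model from the examples above) rebinds a single parameterized query for several composers rather than constructing a new query each time:
from google.appengine.ext import db

q = db.GqlQuery("SELECT * FROM Song WHERE composer = :composer")

for name in ("Lennon, John", "McCartney, Paul"):
    q.bind(composer=name)          # rebind without reparsing the query string
    for song in q.run(limit=10):   # the query executes on first access to results
        print song.title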
- projection()
-
Returns the tuple of properties in the projection, or None.
- is_keys_only()
-
Returns a boolean value indicating whether the query is a keys-only query.
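For illustration, the following sketch (again using the hypothetical Song model, and assuming the SDK in use supports GQL projection queries) shows how these accessors report the shape of a query:
from google.appengine.ext import db

q = db.GqlQuery("SELECT title, composer FROM Song")
print q.projection()     # a tuple such as ('title', 'composer')
print q.is_keys_only()   # False

k = db.GqlQuery("SELECT __key__ FROM Song")
print k.projection()     # None
print k.is_keys_only()   # True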
- run(read_policy=STRONG_CONSISTENCY, deadline=60, offset=0, limit=None, batch_size=20, keys_only=False, projection=None, start_cursor=None, end_cursor=None)
-
Returns an iterable for looping over the results of the query. This allows you to specify the query's operation with parameter settings and access the results iteratively:
- Retrieves and discards the number of results specified by the offset argument.
- Retrieves and returns up to the maximum number of results specified by the limit argument.
The loop's performance thus scales linearly with the sum of offset + limit. If you know how many results you want to retrieve, you should always set an explicit limit value.
This method uses asynchronous prefetching to improve performance. By default, it retrieves its results from the Datastore in small batches, allowing the application to stop the iteration and avoid retrieving more results than are needed.
Tip: To retrieve all available results when their number is unknown, set batch_size to a large value, such as 1000.
Tip: If you don't need to change the default argument values, you can just use the query object directly as an iterable to control the loop. This implicitly calls run() with default arguments.
Arguments
- read_policy
-
Read policy specifying desired level of data consistency:
- STRONG_CONSISTENCY
- Guarantees the freshest results, but limited to a single entity group.
- EVENTUAL_CONSISTENCY
- Can span multiple entity groups, but may occasionally return stale results. In general, eventually consistent queries run faster than strongly consistent queries, but there is no guarantee.
Note: Global (non-ancestor) queries ignore this argument.
- deadline
- Maximum time, in seconds, to wait for Datastore to return a result before aborting with an error. Accepts either an integer or a floating-point value. Cannot be set higher than the default value (60 seconds), but can be adjusted downward to ensure that a particular operation fails quickly (for instance, to return a faster response to the user, retry the operation, try a different operation, or add the operation to a task queue).
- offset
- Number of results to skip before returning the first one.
- limit
-
Maximum number of results to return.
If this parameter is omitted, the value specified in the LIMIT clause of the GQL query string will be used. If explicitly set to None, all available results will be retrieved.
- batch_size
-
Number of results to attempt to retrieve per batch. If limit is set, defaults to the specified limit; otherwise defaults to 20.
- keys_only
-
If true, return only keys instead of complete entities. Keys-only queries are faster and cheaper than those that return complete entities.
- projection
-
List or tuple of names of properties to return. Only entities possessing the specified properties will be returned. If not specified, entire entities are returned by default. Projection queries are faster and cheaper than those that return complete entities.
Note: Specifying this parameter may change the query's index requirements.
- start_cursor
- Cursor position at which to start query.
- end_cursor
- Cursor position at which to end query.
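For example, the following sketch (hypothetical Song model) combines several of these parameters:
from google.appengine.ext import db

q = db.GqlQuery("SELECT * FROM Song WHERE composer = :1 ORDER BY title",
                "Lennon, John")

# Skip the first 10 matching results and iterate over at most the next 5.
for song in q.run(offset=10, limit=5):
    print song.title

# A keys-only run is cheaper when the entities themselves aren't needed.
for key in q.run(keys_only=True, limit=100):
    print key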
- get(read_policy=STRONG_CONSISTENCY, deadline=60, offset=0, keys_only=False, projection=None, start_cursor=None, end_cursor=None)
-
Executes the query and returns the first result, or None if no results are found. At most one result is retrieved from the Datastore; the LIMIT clause of the GQL query string, if any, is ignored.
Arguments
- read_policy
-
Read policy specifying desired level of data consistency:
- STRONG_CONSISTENCY
- Guarantees the freshest results, but limited to a single entity group.
- EVENTUAL_CONSISTENCY
- Can span multiple entity groups, but may occasionally return stale results. In general, eventually consistent queries run faster than strongly consistent queries, but there is no guarantee.
Note: Global (non-ancestor) queries ignore this argument.
- deadline
- Maximum time, in seconds, to wait for Datastore to return a result before aborting with an error. Accepts either an integer or a floating-point value. Cannot be set higher than the default value (60 seconds), but can be adjusted downward to ensure that a particular operation fails quickly (for instance, to return a faster response to the user, retry the operation, try a different operation, or add the operation to a task queue).
- offset
- Number of results to skip before returning the first one.
- keys_only
-
If true, return only keys instead of complete entities. Keys-only queries are faster and cheaper than those that return complete entities.
- projection
-
List or tuple of names of properties to return. Only entities possessing the specified properties will be returned. If not specified, entire entities are returned by default. Projection queries are faster and cheaper than those that return complete entities.
Note: Specifying this parameter may change the query's index requirements.
- start_cursor
- Cursor position at which to start query.
- end_cursor
- Cursor position at which to end query.
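For example, a minimal sketch using the hypothetical Song model:
from google.appengine.ext import db

q = db.GqlQuery("SELECT * FROM Song WHERE composer = :1", "Lennon, John")
song = q.get()    # retrieves at most one entity
if song is not None:
    print song.title
else:
    print "No matching song found"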
- fetch(limit, read_policy=STRONG_CONSISTENCY, deadline=60, offset=0, keys_only=False, projection=None, start_cursor=None, end_cursor=None)
-
Executes the query and returns a (possibly empty) list of results:
- Retrieves and discards the number of results specified by the offset argument.
- Retrieves and returns up to the maximum number of results specified by the limit argument.
The method's performance thus scales linearly with the sum of offset + limit.
Note: This method is merely a thin wrapper around the run() method, and is less efficient and more memory-intensive than using run() directly. You should rarely need to use fetch(); it is provided mainly for convenience in cases where you need to retrieve a full in-memory list of query results.
Tip: To retrieve all available results of a query when their number is unknown, use run() with a large batch size, such as run(batch_size=1000), instead of fetch().
Arguments
- limit
-
Maximum number of results to return.
If set to None, all available results will be retrieved.
- read_policy
-
Read policy specifying desired level of data consistency:
- STRONG_CONSISTENCY
- Guarantees the freshest results, but limited to a single entity group.
- EVENTUAL_CONSISTENCY
- Can span multiple entity groups, but may occasionally return stale results. In general, eventually consistent queries run faster than strongly consistent queries, but there is no guarantee.
Note: Global (non-ancestor) queries ignore this argument.
- deadline
- Maximum time, in seconds, to wait for Datastore to return a result before aborting with an error. Accepts either an integer or a floating-point value. Cannot be set higher than the default value (60 seconds), but can be adjusted downward to ensure that a particular operation fails quickly (for instance, to return a faster response to the user, retry the operation, try a different operation, or add the operation to a task queue).
- offset
- Number of results to skip before returning the first one.
- keys_only
-
If true, return only keys instead of complete entities. Keys-only queries are faster and cheaper than those that return complete entities.
- projection
-
List or tuple of names of properties to return. Only entities possessing the specified properties will be returned. If not specified, entire entities are returned by default. Projection queries are faster and cheaper than those that return complete entities.
Note: Specifying this parameter may change the query's index requirements.
- start_cursor
- Cursor position at which to start query.
- end_cursor
- Cursor position at which to end query.
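For example, the following sketch (hypothetical Song model) retrieves one page of results as an in-memory list:
from google.appengine.ext import db

q = db.GqlQuery("SELECT * FROM Song WHERE composer = :1", "Lennon, John")
songs = q.fetch(limit=20, offset=40)   # skips 40 results, returns up to 20
print len(songs)
for song in songs:
    print song.title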
- count(read_policy=STRONG_CONSISTENCY, deadline=60, offset=0, limit=1000, start_cursor=None, end_cursor=None)
-
Returns the number of results matching the query. This is faster by a constant factor than actually retrieving all of the results, but the running time still scales linearly with the sum of offset + limit. Unless the result count is expected to be small, it is best to specify a limit argument; otherwise the method will continue until it finishes counting or times out.
Arguments
- read_policy
-
Read policy specifying desired level of data consistency:
- STRONG_CONSISTENCY
- Guarantees the freshest results, but limited to a single entity group.
- EVENTUAL_CONSISTENCY
- Can span multiple entity groups, but may occasionally return stale results. In general, eventually consistent queries run faster than strongly consistent queries, but there is no guarantee.
Note: Global (non-ancestor) queries ignore this argument.
- deadline
- Maximum time, in seconds, to wait for Datastore to return a result before aborting with an error. Accepts either an integer or a floating-point value. Cannot be set higher than the default value (60 seconds), but can be adjusted downward to ensure that a particular operation fails quickly (for instance, to return a faster response to the user, retry the operation, try a different operation, or add the operation to a task queue).
- offset
- Number of results to skip before counting the first one.
- limit
-
Maximum number of results to count.
Note: If specified explicitly, this parameter overrides any value set in the LIMIT clause of the GQL query string. However, if the parameter is omitted, the default value of 1000 does not override the GQL query's LIMIT clause and applies only if no LIMIT clause has been specified.
- start_cursor
- Cursor position at which to start query.
- end_cursor
- Cursor position at which to end query.
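For example, a small sketch (hypothetical Song model) that counts matches with an explicit limit:
from google.appengine.ext import db

q = db.GqlQuery("SELECT * FROM Song WHERE composer = :1", "Lennon, John")
n = q.count(limit=500)   # counts at most 500 matching entities
print "Matching songs: %d" % n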
- index_list()
-
Returns a list of indexes used by an executed query, including primary, composite, kind, and single-property indexes.
Caution: Invoking this method on a query that has not yet been executed will raise an AssertionError exception.
Note: This feature is not fully supported on the development server. When used with the development server, the result is either the empty list or a list containing exactly one composite index.
For example, the following code prints various information about the indexes used by a query:
# other imports ...
import webapp2
from google.appengine.api import users
from google.appengine.ext import db

class Greeting(db.Model):
    author = db.StringProperty()
    content = db.StringProperty(multiline=True)
    date = db.DateTimeProperty(auto_now_add=True)

class MainPage(webapp2.RequestHandler):
    def get(self):
        user = users.get_current_user()
        q = db.GqlQuery("SELECT * FROM Greeting WHERE author = :1 ORDER BY date DESC",
                        user.user_id())
        q.fetch(100)    # execute the query so index_list() can be called
        index_list = q.index_list()
        for ix in index_list:
            self.response.out.write("Kind: %s" % ix.kind())
            self.response.out.write("<br />")
            self.response.out.write("Has ancestor? %s" % ix.has_ancestor())
            self.response.out.write("<br />")
            for name, direction in ix.properties():
                self.response.out.write("Property name: " + name)
                self.response.out.write("<br />")
                if direction == db.Index.DESCENDING:
                    self.response.out.write("Sort direction: DESCENDING")
                else:
                    self.response.out.write("Sort direction: ASCENDING")
                self.response.out.write("<br />")
This produces output like the following for each index:
Kind: Greeting
Has ancestor? False
Property name: author
Sort direction: ASCENDING
Property name: date
Sort direction: DESCENDING
- cursor()
-
Returns a base64-encoded cursor string denoting the position in the query's result set following the last result retrieved. The cursor string is safe to use in HTTP GET and POST parameters, and can also be stored in the Datastore or Memcache. A future invocation of the same query can provide this string via the start_cursor parameter or the with_cursor() method to resume retrieving results from this position.
Caution: Invoking this method on a query that has not yet been executed will raise an AssertionError exception.
Note: Not all queries are compatible with cursors; see the Datastore Queries page for more information.
- with_cursor(start_cursor, end_cursor=None)
-
Specifies the starting and (optionally) ending positions within a query's result set from which to retrieve results. The cursor strings denoting the starting and ending positions can be obtained by calling cursor() after a previous invocation of the query. The current query must be identical to that earlier invocation, including the entity kind, property filters, ancestor filters, and sort orders.
Caution: Invoking this method on a query that has not yet been executed will raise an AssertionError exception.
Arguments
- start_cursor
- Base64-encoded cursor string specifying where to start the query.
- end_cursor
- Base64-encoded cursor string specifying where to end the query.
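For example, the following sketch (hypothetical Song model) saves a cursor from one request and uses it to resume the same query later:
from google.appengine.ext import db

# First page of results.
q = db.GqlQuery("SELECT * FROM Song ORDER BY title")
first_page = q.fetch(20)
bookmark = q.cursor()    # base64 string, safe to pass as a URL parameter

# Later, in a subsequent request: the query must be identical to the original.
q2 = db.GqlQuery("SELECT * FROM Song ORDER BY title")
q2.with_cursor(bookmark)
next_page = q2.fetch(20)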