inner class Builder is in charge of creating Customer instances
mandatory fields – either primitive (e.g. id) or annotated with @NotNull (e.g. lastName) – are part of the builder's constructor
for all optional fields, setter methods on the builder are provided
newly created Customer instance is validated using the Validator#validate() method
impossible to retrieve an invalid Customer instance
extract the validation routine into a base class:
abstract class AbstractBuilder<T>
T build() throws ConstraintViolationException
protected abstract T buildInternal();
private static Validator validator
Concrete builder classes have to extend AbstractBuilder and implement the buildInternal() method:
Builder extends AbstractBuilder<Customer>
@Override
protected Customer buildInternal()
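A minimal sketch of how these pieces could fit together (the firstName field, the validator bootstrapping and the exception wrapping are assumptions, not taken from the notes above):

// AbstractBuilder.java
import java.util.HashSet;
import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.validation.Validation;
import javax.validation.Validator;

public abstract class AbstractBuilder<T> {

    private static Validator validator =
        Validation.buildDefaultValidatorFactory().getValidator();

    // returns the new instance or throws, so an invalid instance can never be retrieved
    public T build() throws ConstraintViolationException {
        T object = buildInternal();
        Set<ConstraintViolation<T>> violations = validator.validate(object);
        if (!violations.isEmpty()) {
            throw new ConstraintViolationException(
                new HashSet<ConstraintViolation<?>>(violations));
        }
        return object;
    }

    protected abstract T buildInternal();
}

// Customer.java
import javax.validation.constraints.NotNull;

public class Customer {

    private final int id;           // mandatory, primitive
    @NotNull
    private final String lastName;  // mandatory, annotated
    private final String firstName; // optional

    private Customer(Builder builder) {
        this.id = builder.id;
        this.lastName = builder.lastName;
        this.firstName = builder.firstName;
    }

    public static class Builder extends AbstractBuilder<Customer> {

        private final int id;
        private final String lastName;
        private String firstName;

        // mandatory fields are part of the builder's constructor
        public Builder(int id, String lastName) {
            this.id = id;
            this.lastName = lastName;
        }

        // optional fields get setter methods on the builder
        public Builder firstName(String firstName) {
            this.firstName = firstName;
            return this;
        }

        @Override
        protected Customer buildInternal() {
            return new Customer(this);
        }
    }
}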
Implementing the Builder Pattern using the Bean Validation API
variation of the Builder design pattern for instantiating objects with multiple optional attributes.
this pattern frees you from providing multiple constructors with the different optional attributes as parameters (hard to maintain and hard to read for clients), or setter methods for the optional attributes (which require objects to be mutable and can leave them in an inconsistent state)
but you don't know how those dependencies are instantiated. And you shouldn't really care: all that matters is that UserService depends on the dao and webservice objects.
with the BDD template (given-when-then), tests are easy to read
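As an illustration, a hypothetical given-when-then test; UserService, Dao and WebService here are minimal stand-ins invented to match the names above, not types from the original:

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class UserServiceTest {

    // minimal stand-ins for the injected dependencies
    static class Dao { boolean saved; void save(String name) { saved = true; } }
    static class WebService { }

    static class UserService {
        private final Dao dao;
        private final WebService webservice;
        UserService(Dao dao, WebService webservice) {
            this.dao = dao;
            this.webservice = webservice;
        }
        void register(String name) { dao.save(name); }
    }

    @Test
    public void registersUser() {
        // given
        Dao dao = new Dao();
        UserService service = new UserService(dao, new WebService());

        // when
        service.register("someName");

        // then
        assertTrue(dao.saved);
    }
}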
@Entity
public class User
calling new User("someName", "somePassword", "someOtherName", "someOtherPassword") becomes hard to read and maintain
code duplication
Maintaining this code would turn into a nightmare in no time
running the code above will cause the JPA provider to throw an exception, since the non-nullable password field was never set.
Joshua Bloch gives a fine example of the Builder pattern:
Instead of making the desired object directly, the client calls a constructor (or static factory) with all of the required parameters and gets a builder object. Then the client calls setter-like methods on the builder object to set each optional parameter of interest. Finally, the client calls a parameterless build method to generate the object, which is immutable. The builder is a static member class of the class it builds.
// the Builder is a static member class of the class it builds
enum CoffeeType { Espresso, Latte }  // enum values beyond Espresso are assumptions

public class Coffee {

    private final CoffeeType type;
    private final int cupSize;
    private final boolean milk;

    public static class Builder {
        private final CoffeeType type;  // required
        private final int cupSize;      // required
        private boolean milk;           // optional

        public Builder(CoffeeType type, int cupSize) {
            this.type = type;
            this.cupSize = cupSize;
        }

        public Builder withMilk() {
            this.milk = true;
            return this;
        }

        public Coffee build() {
            return new Coffee(this);
        }
    }

    private Coffee(Builder builder) {
        this.type = builder.type;
        this.cupSize = builder.cupSize;
        this.milk = builder.milk;
    }
}

// usage:
Coffee coffee = new Coffee.Builder(CoffeeType.Espresso, 3).withMilk().build();
the Builder pattern is a good choice when designing classes whose constructors or static factories would have more than a handful of parameters, especially if most of those parameters are optional.
For all entity attributes I create private fields; those that are obligatory become parameters for the public constructor. Since JPA requires a parameter-less constructor, I create one, but I give it protected visibility:
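A sketch of that convention (the nickname field and the column constraints are assumptions):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class User {

    @Id @GeneratedValue
    private Long id;

    @Column(nullable = false)
    private String name;      // obligatory

    @Column(nullable = false)
    private String password;  // obligatory

    private String nickname;  // optional

    // JPA needs a parameter-less constructor; protected keeps it out of client code
    protected User() {
    }

    // obligatory attributes become parameters of the public constructor
    public User(String name, String password) {
        this.name = name;
        this.password = password;
    }
}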
You can map multiple properties as @Id properties and declare an external class to be the identifier type, declared on the entity via the @IdClass annotation. The identifier type must contain the same properties as the identifier properties of the entity: each property name must be the same, and its type must be the same as well if the entity property is of a basic type (for an association property, it must be the type of the primary key of the associated entity). This last case is far from obvious, and we recommend you not use it (for simplicity's sake).
@EmbeddedId property
Declare the identifier class as @Embeddable and map it as a single property of the entity annotated with @EmbeddedId.
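For example (class and property names are assumptions):

// WarehousePk.java – the composite key as an embeddable component
import java.io.Serializable;
import javax.persistence.Embeddable;

@Embeddable
public class WarehousePk implements Serializable {
    private String name;
    private String city;
    // composite identifiers must implement equals() and hashCode()
}

// Warehouse.java – the entity mapping the key as a single @EmbeddedId property
import javax.persistence.EmbeddedId;
import javax.persistence.Entity;

@Entity
public class Warehouse {
    @EmbeddedId
    private WarehousePk id;
}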
Multiple @Id properties
Another, arguably more natural, approach is to place @Id on multiple properties of the entity. It is only supported by Hibernate but does not require an extra embeddable component.
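Roughly (entity and property names are assumptions; since the entity then serves as its own identifier, it must be Serializable and implement equals() and hashCode()):

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Channel implements Serializable {
    @Id private String network;
    @Id private int number;
    private String name;
    // equals() and hashCode() over the two @Id properties omitted for brevity
}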
@IdClass
@IdClass on an entity points to the class (component) representing the identifier of the class.
Warning: this approach is inherited from the EJB 2 days and we recommend against its use. But, after all, it's your application and Hibernate supports it.
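A sketch of the @IdClass variant (names are assumptions; the identifier class mirrors the entity's @Id properties):

// FootballerPk.java – same property names and types as the entity's @Id properties
import java.io.Serializable;

public class FootballerPk implements Serializable {
    private String firstname;
    private String lastname;
    // equals() and hashCode() required
}

// Footballer.java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.IdClass;

@Entity
@IdClass(FootballerPk.class)
public class Footballer {
    @Id private String firstname;
    @Id private String lastname;
    private String club;
}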
Mapping entity associations/relationships
One-to-one
There are three cases for one-to-one associations: either the associated entities share the same primary key values, or a foreign key is held by one of the entities (note that this FK column in the database should be constrained unique to simulate one-to-one multiplicity), or an association table is used to store the link between the 2 entities (a unique constraint has to be defined on each FK to ensure the one-to-one multiplicity).
@PrimaryKeyJoinColumn maps the first case, shared primary keys. For an explicit foreign key column, @JoinColumn(name="passport_fk") maps a foreign key column named passport_fk in the Customer table.
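A sketch of the explicit foreign key case (cascade and fetch settings omitted; the surrounding fields are assumptions):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.OneToOne;

@Entity
public class Customer {
    @Id @GeneratedValue
    private Long id;

    // owning side: holds the unique passport_fk column in the Customer table
    @OneToOne
    @JoinColumn(name = "passport_fk")
    private Passport passport;
}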
The association may be bidirectional; the owner is responsible for the association column(s) update. In a bidirectional relationship, one of the sides (and only one) has to be the owner. To declare a side as not responsible for the relationship, the mappedBy attribute is used.
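Continuing the sketch above, the inverse side could look like this (the owner property name is an assumption; mappedBy refers to the passport property of Customer):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToOne;

@Entity
public class Passport {
    @Id @GeneratedValue
    private Long id;

    // inverse side: not responsible for updating the association column
    @OneToOne(mappedBy = "passport")
    private Customer owner;
}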
Indexed collections (List, Map)
Lists can be mapped in two different ways: as ordered lists, sorted in memory at load time (e.g. a List<Order> property annotated with @OrderBy("number")), or as indexed lists, where the position of each element is persisted in a dedicated column.
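For example (entity names are assumptions; @OrderColumn is the JPA 2 annotation for the indexed variant):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import javax.persistence.OrderBy;
import javax.persistence.OrderColumn;

@Entity
public class Customer {
    @Id @GeneratedValue
    private Long id;

    // ordered list: sorted by Order.number when the collection is loaded
    @OneToMany(mappedBy = "customer")
    @OrderBy("number")
    private List<Order> orders;

    // indexed list: element positions persisted in a dedicated column
    @OneToMany
    @OrderColumn(name = "deliveries_index")
    private List<Order> deliveries;
}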
To use one of the target entity's properties as a key of the map, use @MapKey(name="someProperty").
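Roughly, on the Customer entity from the previous sketch (property names again assumed):

// the map key is the number property of the target Order entity
@OneToMany(mappedBy = "customer")
@MapKey(name = "number")
private Map<String, Order> ordersByNumber;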
the history is persisted in the database by means of a JPA entity bean, and those objects are serialized back to the Flex client each time you enter a new name
All entities marked as [Managed] are considered to correspond to Hibernate/JPA managed entities on the server
It is highly recommended to use JPA optimistic locking in a multi-tier environment (@Version annotation)
In conclusion, the recommended approach to avoid any kind of subtle problems is to have a real uid property which is persisted in the database but is not used as the primary key, for efficiency reasons.
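Such an entity could look like this (a sketch; the UUID initialization and column sizes are assumptions):

import java.util.UUID;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Person {
    @Id @GeneratedValue
    private Long id;  // numeric primary key, efficient for joins and indexes

    // stable identity for the client, persisted but not used as the primary key
    @Column(unique = true, nullable = false, updatable = false, length = 36)
    private String uid = UUID.randomUUID().toString();

    @Version
    private Integer version;  // enables JPA optimistic locking
}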
Here all loaded collections of the Person object will be uninitialized, so uperson contains only the minimum of data needed to correctly merge your changes in the server persistence context. Tide uses the client data tracking (the same mechanism used for dirty checking, see below) to determine which parts of the graph need to be sent.
Dirty Checking and Conflict Handling
Data Validation
Tide integrates with Hibernate Validator 3.x and Bean Validation API (JSR 303) implementations, and propagates server validation errors to the client UI components
Entity–attribute–value model (EAV) is a data model to describe entities where the number of attributes (properties, parameters) that can be used to describe them is potentially vast, but the number that will actually apply to a given entity is relatively modest
also known as object–attribute–value model, vertical database model and open schema
In an EAV data model, each attribute-value pair is a fact describing an entity, and a row in an EAV table stores a single fact
EAV tables are often described as "long and skinny": "long" refers to the number of rows, "skinny" to the few columns
Data is recorded as three columns:
The entity: the item being described.
The attribute or parameter: a foreign key into a table of attribute definitions. At the very least, the attribute definitions table would contain the following columns: an attribute ID, attribute name, description, data type, and columns assisting input validation
The value of the attribute
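In the JPA terms used elsewhere in these notes, one EAV fact row could be modeled like this (a hypothetical sketch, not from the source; all class names are assumptions, and the value is coerced to a string as discussed below):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
public class EavFact {
    @Id @GeneratedValue
    private Long id;

    @ManyToOne
    private DescribedEntity entity;         // the item being described

    @ManyToOne
    private AttributeDefinition attribute;  // FK into the attribute definitions table

    private String value;                   // the value of the attribute

    // DescribedEntity and AttributeDefinition would be sibling entities, omitted here
}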
Row modeling, where facts about something (in this case, a sales transaction) are recorded as multiple rows rather than multiple columns
The differences between row modeling and EAV (which may be considered a generalization of row modeling) are:
A row-modeled table is homogeneous in the facts that it describes
The data type of the value column(s) in a row-modeled table is pre-determined by the nature of the facts it records. By contrast, in an EAV table, the conceptual data type of a value in a particular row depends on the attribute in that row
The attribute: in the EAV table itself, this is just an attribute ID, a foreign key into an attribute definitions table
The value: coercing all values into strings results in a simple but non-scalable structure; larger systems use separate EAV tables for each data type (including binary large objects, "BLOBs"), with the metadata for a given attribute identifying the EAV table in which its data will be stored
Where an EAV system is implemented through RDF, the RDF Schema language may conveniently be used to express such metadata
access to metadata must be restricted, and an audit trail of accesses and changes put into place to deal with situations where multiple individuals have metadata access
quality of the annotation and documentation within the metadata (i.e., the narrative/explanatory text in the descriptive columns of the metadata sub-schema) must be much higher, in order to facilitate understanding by various members of the development team.
Attribute metadata
Validation metadata include data type, range of permissible values or membership in a set of values, regular expression match, default value, and whether the value is permitted to be null
Presentation metadata: how the attribute is to be displayed to the user
Grouping metadata: Attributes are typically presented as part of a higher-order group, e.g., a specialty-specific form. Grouping metadata includes information such as the order in which attributes are presented
In general, JDBC Templates are a poor solution. They don't provide enough abstraction to work across different databases, because you use plain SQL in queries. There is also no real ORM mapping, which results in quite a lot of boilerplate code.
Using @DataSourceDefinition to configure a DataSource
This annotation requires that a data source implementation class (generally from a JDBC driver JAR) be present on the class path (either by including it in your application, or deploying it as a top-level JAR and referring to it via MANIFEST.MF's Class-Path attribute) and be named explicitly.
this annotation bypasses the management layer and as such it is recommended only for development and testing purposes
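A hypothetical example (the H2 driver class, URL and credentials are placeholders; any data source implementation class on the class path would do):

import javax.annotation.Resource;
import javax.annotation.sql.DataSourceDefinition;
import javax.ejb.Stateless;
import javax.sql.DataSource;

@DataSourceDefinition(
    name = "java:app/jdbc/ExampleDS",
    className = "org.h2.jdbcx.JdbcDataSource",
    url = "jdbc:h2:mem:test",
    user = "sa",
    password = "sa")
@Stateless
public class ExampleBean {

    // standard JNDI injection of the data source defined above
    @Resource(lookup = "java:app/jdbc/ExampleDS")
    private DataSource dataSource;
}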
Defining a Managed DataSource
Installing a JDBC driver as a deployment
Installing the JDBC Driver
The driver can be installed either as a deployment or as a core module.
For a data source to be managed by the application server (and thus take advantage of the management and connection pooling facilities it provides), you must perform two tasks. First, you must make the JDBC driver available to the application server; then you can configure the data source itself. Once you have performed these tasks you can use the data source via standard JNDI injection.
recommended way to install a JDBC driver into the application server is to simply deploy it as a regular JAR deployment. The reason for this is that when you run your application server in domain mode, deployments are automatically propagated to all servers to which the deployment applies; thus distribution of the driver JAR is one less thing for administrators to worry about.
Note on MySQL driver and JDBC Type 4 compliance: while the MySQL driver (at least up to 5.1.18) is designed to be a Type 4 driver, its jdbcCompliant() method always returns false. According to MySQL, the reason is that the driver does not pass the SQL92 full compliance tests. Thus, you will need to install the MySQL JDBC driver as a module (see below).
define your module with a module.xml file and the actual jar file that contains your database driver
content of the module.xml file:
under the root directory of the application server is a directory called modules
the module name, which in this example is com.mysql
where the implementation is, which is the resource-root tag with the path element
define any dependencies you might have. In this case, as is the case with all JDBC data sources, we depend on the Java JDBC APIs, which are defined in another module called javax.api, which you can find under modules/javax/api/main as you would expect.
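Putting those pieces together, the module.xml could look like this (the jar file name is an assumption based on the driver version mentioned above):

<?xml version="1.0" encoding="UTF-8"?>
<!-- placed under modules/com/mysql/main/ together with the driver jar -->
<module xmlns="urn:jboss:module:1.0" name="com.mysql">
    <resources>
        <!-- the actual jar file that contains the database driver -->
        <resource-root path="mysql-connector-java-5.1.18.jar"/>
    </resources>
    <dependencies>
        <!-- the Java JDBC APIs, defined in the javax.api module -->
        <module name="javax.api"/>
    </dependencies>
</module>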