Group items tagged: SQL

kuni katsuya

List of unit testing frameworks - Wikipedia, the free encyclopedia

kuni katsuya

Adobe - real time data streaming | Adobe LiveCycle Data Services ES3: Solutions

  • BlazeDS
  • Feature comparison highlights (each "No / Yes" pair in the original table is BlazeDS vs. LiveCycle Data Services):
  • Java NIO high-performance messaging (thousands of clients per CPU): No / Yes
  • Real Time Messaging Protocol (RTMP): No / Yes
  • Data throttling: No / Yes
  • Reliable communications
  • Managed remoting
  • Transaction (batch processing)
  • Data paging
  • Lazy loading (on demand)
  • Conflict resolution and synchronization
  • SQL adapter
  • Hibernate adapter: No / Yes
  • "Fiber-aware" assembler: No / Yes
  • Offline synchronization framework
kuni katsuya

Migrating from Spring to Java EE 6 - Part 4 | How to JBoss

  • discuss the rationale for migrating your applications from Spring to Java EE 6 and show you real examples of upgrading the web UI, replacing the data access layer, migrating AOP to CDI interceptors, migrating JMX, how to deal with JDBC templates, and as an added bonus will demonstrate how to perform integration tests of your Java EE 6 application using Arquillian
  • EntityManagerClinicTest
  • There is also an interesting Arquillian Persistence extension that integrates DBUnit in Arquillian where you can define your test data externally
  • @RunWith(Arquillian.class) (a minimal test sketch follows this list)
  • JDBC Templates hardly give any abstraction on top of the database and you're on your own for Object Relational Mapping. We strongly advise using JPA wherever possible; it gives portability by abstracting most of the database-specific SQL that you would need, and it does all the hard and painful work of object mapping
  • small part of your application
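The annotations above come together roughly as follows. This is a minimal sketch of an Arquillian integration test, assuming JUnit and ShrinkWrap; GreeterService and the test class are illustrative names, not the article's EntityManagerClinicTest.

    import static org.junit.Assert.assertEquals;

    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;

    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.asset.EmptyAsset;
    import org.jboss.shrinkwrap.api.spec.JavaArchive;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    // GreeterService.java - the illustrative bean under test
    @ApplicationScoped
    public class GreeterService {
        public String greet(String name) { return "Hello, " + name; }
    }

    // GreeterServiceTest.java - executed inside the container managed by Arquillian
    @RunWith(Arquillian.class)
    public class GreeterServiceTest {

        @Deployment                              // micro-deployment built with ShrinkWrap
        public static JavaArchive createDeployment() {
            return ShrinkWrap.create(JavaArchive.class)
                    .addClass(GreeterService.class)
                    .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml"); // enables CDI
        }

        @Inject
        GreeterService greeter;                  // injected from the deployed archive

        @Test
        public void shouldGreetByName() {
            assertEquals("Hello, Duke", greeter.greet("Duke"));
        }
    }

The Arquillian Persistence extension mentioned above would let the same kind of test seed its data through DBUnit instead of hand-written setup code.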
kuni katsuya

Article Series: Migrating Spring Applications to Java EE 6 - Part 3 | How to JBoss

  • Stateless Session Bean is transactional by default
  • In this article we will discuss migrating the DAO layer, AOP and JMX
  • Migrating JDBC templates
  • In general, JDBC Templates are a poor solution. They don't have enough abstraction to work on different databases because you use plain SQL in queries. There is also no real ORM mapping, which results in quite a lot of boilerplate code (a JPA-based replacement is sketched after this list)
  • SimpleJdbcTemplate(ds)
  • @InterceptorBinding
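A minimal sketch of the kind of replacement the article argues for: a stateless session bean (transactional by default) using the JPA EntityManager where a SimpleJdbcTemplate would have been. The Customer entity, DAO name and query are placeholders, not code from the article.

    import java.util.List;

    import javax.ejb.Stateless;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    // Customer.java - the ORM mapping that the JDBC template approach lacks
    @Entity
    public class Customer {
        @Id private Long id;
        private String lastName;
        // getters/setters omitted
    }

    // CustomerDao.java - stateless session bean: transactional by default, no template
    @Stateless
    public class CustomerDao {

        @PersistenceContext
        private EntityManager em;                  // container-managed persistence context

        public Customer find(Long id) {
            return em.find(Customer.class, id);    // no hand-written row mapping
        }

        public List<Customer> findByLastName(String lastName) {
            return em.createQuery(
                    "SELECT c FROM Customer c WHERE c.lastName = :lastName", Customer.class)
                .setParameter("lastName", lastName)
                .getResultList();
        }
    }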
kuni katsuya

Getting Started Developing Applications Guide - JBoss AS 7.1 - Project Documentation Ed...

  • CDI + JPA + EJB + JTA + JSF: Login quickstart
  • displayed using JSF views, business logic is encapsulated in CDI beans, information is persisted using JPA, and transactions can be controlled manually or using EJB (a sketch of such a bean follows this list)
  • Deploying the Login example using Eclipse
  • deploy the example by right-clicking on the jboss-as-login project and choosing Run As -> Run On Server
  • src/main/webapp directory
  • beans.xml and faces-config.xml tell JBoss AS to enable CDI and JSF for the application
  • don't need a web.xml
  • src/main/resources
  • persistence.xml, which sets up JPA, and import.sql which Hibernate, the JPA provider in JBoss AS, will use to load the initial users into the application when the application starts
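A rough sketch of how those pieces can fit together in a bean like the one the Login quickstart describes; the User entity, Credentials bean and query below are illustrative assumptions rather than code copied from the quickstart.

    import java.io.Serializable;
    import java.util.List;

    import javax.enterprise.context.RequestScoped;
    import javax.enterprise.context.SessionScoped;
    import javax.inject.Inject;
    import javax.inject.Named;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    // User.java - JPA entity; import.sql would pre-load rows like this at startup
    @Entity
    public class User {
        @Id private Long id;
        private String username;
        public String getUsername() { return username; }
    }

    // Credentials.java - request-scoped holder bound to the JSF login form
    @Named
    @RequestScoped
    public class Credentials {
        private String username;
        public String getUsername() { return username; }
        public void setUsername(String username) { this.username = username; }
    }

    // Login.java - session-scoped CDI bean driving the JSF view (#{login})
    @Named("login")
    @SessionScoped
    public class Login implements Serializable {

        @Inject
        private Credentials credentials;

        @PersistenceContext
        private EntityManager em;              // JPA unit configured in persistence.xml

        private User currentUser;

        public void login() {                  // invoked from a JSF action
            List<User> results = em.createQuery(
                    "SELECT u FROM User u WHERE u.username = :name", User.class)
                .setParameter("name", credentials.getUsername())
                .getResultList();
            currentUser = results.isEmpty() ? null : results.get(0);
        }

        public boolean isLoggedIn() {
            return currentUser != null;
        }
    }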
kuni katsuya

Data Source Configuration in AS 7 | JBoss AS 7 | JBoss Community

  • Data Source Configuration in AS 7
  • Using @DataSourceDefinition to configure a DataSource
  • This annotation requires that a data source implementation class (generally from a JDBC driver JAR) be present on the class path (either by including it in your application, or deploying it as a top-level JAR and referring to it via MANIFEST.MF's Class-Path attribute) and be named explicitly.
  • this annotation bypasses the management layer and as such it is recommended only for development and testing purposes (a @DataSourceDefinition sketch follows this list)
  • Defining a Managed DataSource
  • Installing a JDBC driver as a deployment
  • Installing the JDBC Driver
  • deployment or as a core module
  • For a data source to be managed by the application server (and thus take advantage of the management and connection pooling facilities it provides), you must perform two tasks. First, you must make the JDBC driver available to the application server; then you can configure the data source itself. Once you have performed these tasks you can use the data source via standard JNDI injection.
  • The recommended way to install a JDBC driver into the application server is to simply deploy it as a regular JAR deployment. The reason for this is that when you run your application server in domain mode, deployments are automatically propagated to all servers to which the deployment applies; thus distribution of the driver JAR is one less thing for administrators to worry about.
  • Note on MySQL driver and JDBC Type 4 compliance: while the MySQL driver (at least up to 5.1.18) is designed to be a Type 4 driver, its jdbcCompliant() method always returns false. The reason, according to MySQL, is that the driver does not pass the full SQL 92 compliance tests. Thus, you will need to install the MySQL JDBC driver as a module (see below).
  • Installing a JDBC driver as a module
  • <module xmlns="urn:jboss:module:1.0" name="com.mysql">
        <resources>
            <resource-root path="mysql-connector-java-5.1.15.jar"/>
        </resources>
        <dependencies>
            <module name="javax.api"/>
        </dependencies>
    </module>
  • jboss-7.0.0.<release>/modules/com/mysql/main
  • define your module with a module.xml file, and the actual jar file that contains your database driver
  • content of the module.xml file
  • Under the root directory of the application server is a directory called modules
  • module name, which in this example is com.mysql
  • where the implementation is, which is the resource-root tag with the path element
  • define any dependencies you might have. In this case, as is the case with all JDBC data sources, we depend on the Java JDBC APIs, which are defined in another module called javax.api, which you can find under modules/javax/api/main as you would expect.
  • Defining the DataSource itself
  • <datasource jndi-name="java:jboss/datasources/MySqlDS" pool-name="MySqlDS">
        <connection-url>jdbc:mysql://localhost:3306/EJB3</connection-url>
        <driver>com.mysql</driver>
  • <drivers>
        <driver name="com.mysql" module="com.mysql">
            <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
        </driver>
    </drivers>
  • jboss-7.0.0.<release>/domain/configuration/domain.xml or jboss-7.0.0.<release>/standalone/configuration/standalone.xml
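For comparison with the managed <datasource> above, here is a sketch of the @DataSourceDefinition route mentioned earlier. The bean name, credentials and query are placeholders, and the MySQL driver JAR must be on the class path for the named MysqlDataSource class to resolve.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    import javax.annotation.Resource;
    import javax.annotation.sql.DataSourceDefinition;
    import javax.ejb.Stateless;
    import javax.sql.DataSource;

    // Bypasses the management layer, so (as noted above) development/testing only.
    @DataSourceDefinition(
            name = "java:app/jdbc/ExampleDS",
            className = "com.mysql.jdbc.jdbc2.optional.MysqlDataSource",
            url = "jdbc:mysql://localhost:3306/EJB3",
            user = "appuser",                    // placeholder credentials
            password = "secret")
    @Stateless
    public class CustomerRepository {

        // Standard JNDI injection; this part looks the same when the data source is
        // instead defined as a managed <datasource> in standalone.xml/domain.xml.
        @Resource(lookup = "java:app/jdbc/ExampleDS")
        private DataSource dataSource;

        public int countCustomers() throws SQLException {
            try (Connection con = dataSource.getConnection();
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM customer")) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }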
kuni katsuya

Comparison of database tools - Wikipedia, the free encyclopedia

kuni katsuya

EntityManager (Java EE 6)

  • createNamedQuery(java.lang.String name)
  • Java Persistence query language or in native SQL (both are illustrated in the sketch after this list)
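A small illustration of the two query styles highlighted above; the Customer entity, query name and table are assumptions for the example, not part of the Javadoc page.

    import java.util.List;

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.NamedQuery;

    // Customer.java - entity carrying a named JPQL query
    @Entity
    @NamedQuery(name = "Customer.findByCity",
                query = "SELECT c FROM Customer c WHERE c.city = :city")
    public class Customer {
        @Id private Long id;
        private String city;
        // getters/setters omitted
    }

    // CustomerQueries.java - the same lookup in JPQL and in native SQL
    public class CustomerQueries {

        public static List<Customer> findByCity(EntityManager em, String city) {
            return em.createNamedQuery("Customer.findByCity", Customer.class)
                    .setParameter("city", city)
                    .getResultList();
        }

        @SuppressWarnings("unchecked")
        public static List<Customer> findByCityNative(EntityManager em, String city) {
            return em.createNativeQuery(
                    "SELECT * FROM customer WHERE city = ?", Customer.class)
                .setParameter(1, city)
                .getResultList();
        }
    }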
kuni katsuya

Selling Weld and EE6 | Weld | JBoss Community

  • regarding the issue of selling Weld and EE6 to developers/shops....
  • How bout a JdbcTemplate Spring equivalent in the case of projects using legacy db schemas
  • portable extension to Weld
  • William Drai
  • Honestly I don't see any value in switching to CDI if it is to reproduce the same awful patterns
  • please not this Dao/Template mess
  • Gavin King
  • Their template pattern is a solution in search of a problem
    • kuni katsuya
       
      gold! :)
  • Because, of course, there are no other well-known patterns for dealing with boiler-plate cleanup code and connection leaks.
  • This is exactly the kind of brain-damage that Spring does to people!
    • kuni katsuya
       
      platinum!!!
  • It gives people a half-assed solution and somehow shuts down their brains so they stop asking themselves how this solution could be improved upon
  • It's a very impressive magic trick, and I wish I knew how to do it myself. But then, I'm just not like that. I'm always trying to poke holes in things - whether they were Invented Here or Not.
  • but that might be too high-level for your taste. There are other, less-abstract options.
  • Regarding exception handling, this is one area where Spring does a good job: "The Spring Framework's handling of SQLException is one of its most useful features in terms of enabling easier JDBC development and maintenance. The Spring Framework provides JDBC support that abstracts SQLException and provides a DAO-friendly, unchecked exception hierarchy."
  • Utter nonsense and dishonest false advertising
  • Automatic connection closing (and other boiler-plate code) is obviously a hard requirement to be handled by the fwk.
  • Pffffff. It's a trivial requirement which I can solve in my framework with two lines of code in a @Disposes method. Did you see any connection handling in the code above? (a producer/disposer sketch follows this list)
  • I mean, seriously guys. The Spring stuff is trivial and not even very elegant. I guess it's easier for me to see that, since I spent half my career thinking about data access and designing data access APIs. But even so...
  • I don't understand. You hate the ability to write typesafe SQL that much?
  • Gavin King
  • Methods with long argument lists are a code smell.
  • It's something Spring copied from Hibernate 1.x, back in the days before varargs
  • It's something we removed in Hibernate2 and JPA.
  • there are a bunch of people who don't want to use JPA.
  • They don't understand, or see the value of, using managed objects to represent their persistent data.
  • Um. Why? Why would that be a bad thing? I imagine that any app with 1000 queries has tens of thousands of classes already. What's the problem? Why is defining a class worse than writing a method?
  • Are you working from some totally bizarre metric where you measure code quality by number of classes?
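A sketch of the producer/disposer idiom King alludes to ("two lines of code in a @Disposes method"); the JNDI name, scope and error handling are assumptions for illustration, not code from the thread.

    import java.sql.Connection;
    import java.sql.SQLException;

    import javax.annotation.Resource;
    import javax.enterprise.context.RequestScoped;
    import javax.enterprise.inject.Disposes;
    import javax.enterprise.inject.Produces;
    import javax.sql.DataSource;

    public class ConnectionProducer {

        @Resource(lookup = "java:jboss/datasources/MySqlDS")   // illustrative JNDI name
        private DataSource dataSource;

        @Produces
        @RequestScoped
        Connection openConnection() {
            try {
                return dataSource.getConnection();   // opened once per request
            } catch (SQLException e) {
                throw new IllegalStateException("Could not open JDBC connection", e);
            }
        }

        void closeConnection(@Disposes Connection connection) {
            try {
                connection.close();                  // the boiler-plate lives here, once
            } catch (SQLException e) {
                // swallowed: disposal failures should not break request teardown
            }
        }
    }

Application code can then simply @Inject a Connection and never touch the cleanup logic, which is the point being argued against templates.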
kuni katsuya

MySQL Error Number 1005 Can't create table '.mydb#sql-328_45.frm' (errno: 150) | VerySi...

  • MySQL Error Number 1005 Can't create table (errno: 150)
  • SHOW ENGINE INNODB STATUS
  • Known Causes (a DDL example that satisfies these constraints follows this list):
  • First Steps:
  • dreaded errno 150:
  • The type and/or size of the two key fields is not an exact match
  • One of the key fields that you are trying to reference does not have an index and/or is not a primary key
  • One or both of your tables is a MyISAM table
  • You have specified a cascade ON DELETE SET NULL, but the relevant key field is set to NOT NULL
  • Make sure that the Charset and Collate options are the same both at the table level as well as individual field level for the key columns
  • You have a default value (ie default=0) on your foreign key column
  • You have a syntax error in your ALTER statement or you have mistyped one of the field names in the relationship
  • The name of your foreign key exceeds the max length of 64 chars
    • kuni katsuya
       
      64 char max? seriously??? in this century?!
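As a concrete illustration of the checklist above, here is a hypothetical parent/child pair that avoids the listed causes (matching column type and size, referenced column is a primary key, both tables InnoDB, same charset, short constraint name), executed through plain JDBC. Table names and connection details are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class ForeignKeyExample {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/mydb", "user", "pass");
                 Statement st = con.createStatement()) {

                st.executeUpdate(
                    "CREATE TABLE parent ("
                    + " id INT UNSIGNED NOT NULL PRIMARY KEY"           // referenced column is a PK
                    + ") ENGINE=InnoDB DEFAULT CHARSET=utf8");

                st.executeUpdate(
                    "CREATE TABLE child ("
                    + " id INT UNSIGNED NOT NULL PRIMARY KEY,"
                    + " parent_id INT UNSIGNED NOT NULL,"               // exact type/size match with parent.id
                    + " CONSTRAINT fk_child_parent"                     // well under the 64-char name limit
                    + "   FOREIGN KEY (parent_id) REFERENCES parent (id)"
                    + ") ENGINE=InnoDB DEFAULT CHARSET=utf8");
            }
        }
    }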
kuni katsuya

MySQL :: MySQL 5.7 Reference Manual :: 5.4.4.2 Configurable InnoDB Auto-Increment Locking

  • Configurable
    • kuni katsuya
       
      new and improved!(?)
  • table-level locks held until the end of a statement make INSERT statements using auto-increment safe for use with statement-based replication
  • However, those locks limit concurrency and scalability when multiple transactions are executing insert statements at the same time
  • For INSERT statements where the number of rows to be inserted is known at the beginning of processing the statement, InnoDB quickly allocates the required number of auto-increment values without taking any lock, but only if there is no concurrent session already holding the table-level AUTO-INC lock (because that other statement will be allocating auto-increment values one-by-one as it proceeds)
  • obtains auto-increment values under the control of a mutex (a light-weight lock) that is not held until the statement completes, but only for the duration of the allocation process
  • innodb_autoinc_lock_mode = 0 (“traditional” lock mode)
  • special table-level AUTO-INC lock is obtained and held to the end of the statement
  • This lock mode is provided for backward compatibility
  • innodb_autoinc_lock_mode = 1 (“consecutive” lock mode)
  • important impact of this lock mode is significantly better scalability
  • This mode is safe for use with statement-based replication
  • innodb_autoinc_lock_mode = 2 (“interleaved” lock mode)
  • This is the fastest and most scalable lock mode, but it is not safe when using statement-based replication or recovery scenarios when SQL statements are replayed from the binary log
  • Using auto-increment with replication
  • set innodb_autoinc_lock_mode to 0 or 1 and use the same value on the master and its slaves (a quick JDBC check of the running value is sketched after this list)
  • Auto-increment values are not ensured to be the same on the slaves as on the master if you use innodb_autoinc_lock_mode = 2 (“interleaved”) or configurations where the master and slaves do not use the same lock mode
  • If you are using row-based replication, all of the auto-increment lock modes are safe
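The variable is not dynamic, so it has to be set at server startup (for example innodb_autoinc_lock_mode=1 in my.cnf) and kept identical across master and slaves. A quick, hypothetical JDBC check of the value a running server is actually using might look like this; connection details are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class AutoIncLockModeCheck {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/mydb", "user", "pass");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode'")) {
                while (rs.next()) {
                    // 0 = traditional, 1 = consecutive (the 5.7 default), 2 = interleaved
                    System.out.println(rs.getString("Variable_name")
                            + " = " + rs.getString("Value"));
                }
            }
        }
    }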
kuni katsuya

MySQL :: MySQL 5.7 Reference Manual :: 5.1.4 Server System Variables
