To start working with this database, the root password must be set. Here, root does not refer to the system's root account but to the database administrator, although they can be the same person. So let's set it and log in:
mysqladmin -u root password 'thepassword'
mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 10 to server version: 5.0.20a-Debian_1-log
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
The MySQL configuration file is located in /etc/mysql/my.cnf.
Installing and Configuring PHP
To install PHP itself:
apt-get install php5
Now, add support for MySQL:
apt-get install php5-mysql
Just as for Apache and MySQL, extra packages will have to be installed as well: apache2-mpm-prefork, libapache2-mod-php5 and php5-common.
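If they are not pulled in automatically as dependencies, they can be installed explicitly; a sketch using the package names listed above:
apt-get install apache2-mpm-prefork libapache2-mod-php5 php5-common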
The configuration file for PHP is located in /etc/php5/apache2/php.ini; every time you modify it, Apache must be restarted.
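On a Debian system such as this one, that is done with:
/etc/init.d/apache2 restart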
To verify the PHP installation, create a test script at /var/www/information.php.
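The original text does not show the file's contents; a minimal sketch, assuming the usual phpinfo() test page, would be created with:
# Write a one-line PHP script that prints the PHP configuration
echo '<?php phpinfo(); ?>' > /var/www/information.php
After restarting Apache, browse to http://your-server/information.php and check that the MySQL extension is listed.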
Installing and Configuring Postfix
The default configuration files are in /etc/postfix; we will only use main.cf.
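As a hedged illustration only (the host and domain names below are hypothetical), a simple Internet-site style main.cf typically contains settings along these lines:
myhostname = mail.example.com
mydestination = example.com, localhost
relayhost =
inet_interfaces = all
After editing main.cf, reload Postfix (for example with postfix reload) so the changes take effect.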
In this article we explain how you can use Apache authentication to restrict access to your website or to parts of it.
You have to create the files .htaccess and .htpasswd. These files are protected by the server software, so you cannot download or view them with your web browser.
You probably want to make things easier for yourself and your users by setting up a reverse proxy so that connections are accepted on port 80 or 443. We will restrict Odoo to localhost only, still running on port 8069.
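As a sketch only (the hostname is hypothetical, and mod_proxy plus mod_proxy_http must be enabled, e.g. with a2enmod proxy proxy_http), an Apache virtual host that forwards incoming requests on port 80 to the local Odoo instance could look like this:
<VirtualHost *:80>
    ServerName odoo.example.com
    ProxyPreserveHost On
    # Forward everything to Odoo, which listens only on localhost:8069
    ProxyPass / http://127.0.0.1:8069/
    ProxyPassReverse / http://127.0.0.1:8069/
</VirtualHost>
Restricting Odoo itself to localhost is done in its own server configuration (for example an xmlrpc_interface = 127.0.0.1 style setting, depending on the Odoo version).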
Schema on write
This is the traditional approach for Business Intelligence. A model, often dimensional, is built as part of the design process. This model is an abstraction of the complexity of the underlying systems, put in business terms. The purpose of the model is to allow the business users to interrogate the data in a way they understand.
The model is instantiated through physical database tables, and the data is loaded through an ETL (extract, transform and load) process that takes data from one or more source systems, transforms it to fit the model, and then loads it into the model.
The key thing is that the model is determined before the data is finally written and the users are very much guided or driven by the model in how they query the data and what results they can get from the system. The designer must anticipate the queries and requests in advance of the user asking the questions.
Schema on read
Schema on read works on a different principle and is more common in the Big Data world. The data is not transformed in any way when it is stored; the data store acts as a big bucket.
The modelling of the data only occurs when the data is read. MapReduce is the clearest example: the map step is the understanding of the data structure. Hadoop provides a large distributed file system, which is very good at storing large volumes of data, but this is only potential value. It is the mapping of this data that provides the value, and this is done when the data is read, not when it is written.
New World Order
So whereas Business Intelligence used to be driven by the model, the ETL process to populate the model, and the reporting tool to query the model, there is now an approach where the data is collected in its raw form and advanced statistical or analytical tools are used to interrogate it. An example of one such tool is R.
Which approach to use is often driven by what the user wants to find out. If the question is clearly formed and the sources of data required to answer it are well understood, for example how many units of a product have we sold, then the traditional schema on write approach is best.
Re: Transferring SugarCRM to a new server
If you're using Linux, try to have the same version of PHP, Apache, and the database (MySQL for instance) in order to avoid compatibility issues. On your production server, tar up the SugarCRM root directory, transfer it to the new server, and untar it wherever your new root directory will be.
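For example (the paths, user and host names below are placeholders, not from the original post):
# On the production server: archive the SugarCRM root directory
tar -czf sugarcrm.tar.gz -C /var/www sugarcrm
# Copy the archive to the new server
scp sugarcrm.tar.gz user@newserver:/tmp/
# On the new server: unpack it into the new web root
tar -xzf /tmp/sugarcrm.tar.gz -C /var/www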
Next take a db dump of your database, transfer it to the new server and do a restore. Make sure apache is configured on the new server to point to the root of sugarcrm and start it up.
Make sure to modify config.php to account for any change in paths and hostname.
That's what I've found to be the easiest way to 'clone' Sugar.
Export Database
Take a dump of the existing database with mysqldump:
mysqldump -h localhost -u [MySQL user, e.g. root] -p[database password] -c --add-drop-table --add-locks --all --quick --lock-tables [name of the database] > sqldump.sql
Copy Filesystem
Copy all of your files to the new server. This can be done simply by locating the root directory of your old instance and copying it to the new server location.
Import Database
Import the mysql database into the new server. Here's how you would restore your custback.sql file to the Customers database.
mysql -u sadmin -ppass21 Customers < custback.sql
Here's the general format you would follow:
mysql -u [username] -p[password] [database_to_restore] < [backupfile]
Check Files and Permissions
Check Config.php
Open <sugarroot/config.php> and make sure that all settings still apply to the new server, for example:
'db_host_name' => 'localhost',
'db_user_name' => 'root',
'db_password' => 'PASSWORD',
'db_name' => 'DATABASE_NAME',
'db_type' => 'mysql',
as well as 'site_url' and so on.
Check htaccess
Open <sugarroot/.htaccess> and ensure that the new server URLs are used correctly.
Check Permissions
Check that the permissions are correct on the new server. That is, the entire custom and cache directories (and all their subdirectories), as well as the config.php file, must be owned by, and writable by, the user that runs the application on the server.
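For example, on a typical Linux install where Apache runs as www-data and SugarCRM lives in /var/www/sugarcrm (both assumptions, adjust to your setup):
# Hand ownership of the writable parts to the web server user
chown -R www-data:www-data /var/www/sugarcrm/custom /var/www/sugarcrm/cache /var/www/sugarcrm/config.php
# Make the directories writable by that user and its group
chmod -R 775 /var/www/sugarcrm/custom /var/www/sugarcrm/cache
chmod 664 /var/www/sugarcrm/config.php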
.htaccess
The .htaccess file is a simple text file placed in the directory you want the contents of the file to affect. The rules and configuration directives in the .htaccess file will be enforced on whatever directory it is in and on all sub-directories as well. In order to password-protect content, there are a few directives we must become familiar with. One of these directives in the .htaccess file (the AuthUserFile directive) tells the Apache web server where to look to find the username/password pairs.
.htpasswd
The .htpasswd file is the second part of the affair. The .htpasswd file is also a simple text file. Instead of directives, the .htpasswd file contains username/password pairs. The password will be stored in encrypted form and the username will be in plaintext.
There is a special program on a *nix machine that is designed to manipulate the .htpasswd file on your behalf. The name of this program is htpasswd.
There are two ways to use it. The first is to create a new .htpasswd file and add a username/password pair to it. The second is to add a username/password pair to an existing .htpasswd file.
To create a new .htpasswd file in /usr/uj/jurbanek/ with username john, the following command would be used.
# '-c' stands for 'create'. Only to be used when creating a new .htpasswd file.
# You will be prompted for the password you would like to use after entering the command below.
htpasswd -c /usr/uj/jurbanek/.htpasswd john
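To add another user to a .htpasswd file that already exists (the second way), the -c flag is simply omitted; the username below is just an example:
# No '-c': the file already exists, so a new username/password pair is appended
htpasswd /usr/uj/jurbanek/.htpasswd mary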
Example
Here is an example of how to prevent users from accessing the directory, password-protect a specific file, and allow users to view another specific file:
AuthUserFile /your/path/.htpasswd
AuthName "Authorization Required"
AuthType Basic
Order Allow,Deny
<Files myfile1.html>
Order Allow,Deny
# Allow the host-based check to pass so that only the password requirement applies
Allow from all
require valid-user
</Files>
<Files myfile2.html>
Order Deny,Allow
</Files>
Hadoop is a framework written in Java for running applications on large clusters of commodity hardware and incorporates features similar to those of the Google File System (GFS) and of the MapReduce computing paradigm. Hadoop's HDFS is a highly fault-tolerant distributed file system and, like Hadoop in general, is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications that have large data sets.
Some of the Hadoop projects we will talk about are:
HDFS : A distributed filesystem that runs on large clusters of commodity machines.
MapReduce: A distributed data processing model and execution environment that runs on large clusters of commodity machines.
Pig: A data flow language and execution environment for exploring very large datasets. Pig runs on HDFS and MapReduce clusters.
HBase: A distributed, column-oriented database. HBase uses HDFS for its underlying storage, and supports both batch-style computations using MapReduce and point queries (random reads).
ZooKeeper: A distributed, highly available coordination service. ZooKeeper provides primitives such as distributed locks that can be used for building distributed applications.
Oozie: Oozie is a workflow scheduler system to manage Apache Hadoop jobs.
The environment used here is Oracle Linux as the operating system with Hadoop 1.1.2 or 1.2.0.
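As a quick illustration of working with HDFS (a sketch assuming a running Hadoop 1.x installation; the paths and file name are hypothetical), files are copied in and out with the hadoop fs shell:
# Create a directory in HDFS and copy a local file into it
hadoop fs -mkdir /user/hadoop/input
hadoop fs -put localfile.txt /user/hadoop/input/
# List the directory and read the file back from HDFS
hadoop fs -ls /user/hadoop/input
hadoop fs -cat /user/hadoop/input/localfile.txt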