Overview
The Oracle Database File System (DBFS) creates a file system interface to files stored in the database. DBFS is similar to NFS in that it provides a shared network file system that looks like a local file system. Because the data is stored in the database, the file system inherits all of the HA/DR capabilities provided by the database.
With DBFS, the server is the Oracle Database. Files are stored as SecureFiles LOBs, and PL/SQL procedures implement file system access primitives such as create, open, read, write, and list directory. The implementation of the file system in the database is called the DBFS SecureFiles Store, which allows users to create file systems that can be mounted by clients. Each file system has its own dedicated tables that hold the file system content.
In this article we will demonstrate the steps to configure Oracle Database File System (DBFS) on Oracle Exadata Database Machine.
Note: On platforms other than Oracle Exadata Database Machine, additional setup steps may be required to install the fuse RPM packages, which are installed by default on Exadata database servers.
Assumptions
- Exadata Database Machine running Oracle 11.2 software
- User equivalence (passwordless SSH) for the root and oracle users is set up between compute nodes
- A dbs_group file exists in the user's home directory, containing one hostname or IP address per line:
oraclouddbadm01
oraclouddbadm02
- The root and oracle user passwords are known
Steps to Configure DBFS on Exadata Database Machine
- Add the oracle user to the fuse group on Linux. Run these commands as the root user.
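The command itself is not shown above; a minimal sketch using dcli (assuming the group is named fuse, as created by the fuse RPM):

```shell
# Run as root on the first node; usermod -a -G appends the fuse group
# without disturbing oracle's existing group memberships.
dcli -g ~/dbs_group -l root usermod -a -G fuse oracle
```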
- Create the /etc/fuse.conf file with the user_allow_other option. Ensure proper privileges are applied to this file.
[root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root chmod 644 /etc/fuse.conf
[root@oraclouddbadm01 ~]# cat /etc/fuse.conf
user_allow_other
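The creation of /etc/fuse.conf itself is not shown above; one way to create it across all nodes with dcli (a sketch, assuming root user equivalence):

```shell
# Write the single required option to /etc/fuse.conf on every compute node.
dcli -g ~/dbs_group -l root "echo user_allow_other > /etc/fuse.conf"
```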
- On all compute nodes, create a directory that will be used as the mount point for the DBFS file system.
[root@oraclouddbadm01 ~]# ls -l /dbfs_direct
total 0
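The directory creation itself is not shown; a sketch with dcli:

```shell
# Create the mount point on all compute nodes as root.
dcli -g ~/dbs_group -l root mkdir -p /dbfs_direct
```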
- Change ownership on the mount point directory so oracle can access it.
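A sketch of the ownership change (the oracle:dba ownership mirrors what is applied to mount-dbfs.sh later in this article; adjust the group to your installation):

```shell
# Let the oracle user own the DBFS mount point on every node.
dcli -g ~/dbs_group -l root chown oracle:dba /dbfs_direct
```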
- Restart Oracle Clusterware to pick up the additional group (fuse) membership for the oracle user.
[root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root /u01/app/11.2.0.4/grid/bin/crsctl stop crs
[root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root /u01/app/11.2.0.4/grid/bin/crsctl start crs
oraclouddbadm01: CRS-4123: Oracle High Availability Services has been started.
oraclouddbadm02: CRS-4123: Oracle High Availability Services has been started.
- Create a database (here named fsdb) to hold the DBFS repository.
- To create the repository, create a new tablespace to hold the DBFS objects and a database user that will own the objects.
Tablespace created.
- Create a user and assign the necessary privileges to create objects.
User created.
SQL> grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;
Grant succeeded.
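The CREATE TABLESPACE and CREATE USER statements behind the output above are not shown; a minimal sketch (the names dbfsts and dbfs_user and the password oracle come from later steps in this article; the +DATA disk group and 8G size are assumptions to adjust for your environment):

```sql
-- Assumption: ASM disk group +DATA; size the tablespace for your expected file volume.
create bigfile tablespace dbfsts datafile '+DATA' size 8g autoextend on;
create user dbfs_user identified by oracle
  default tablespace dbfsts quota unlimited on dbfsts;
```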
- Now create the database objects that will hold DBFS. The script takes two arguments: the tablespace name and the file system name.
ORACLE_SID = [fsdb1] ? fsdb1
The Oracle base remains unchanged with value /u01/app/oracle
oraclouddbadm01-fsdb1 {/home/oracle}:cd $ORACLE_HOME/rdbms/admin
oraclouddbadm01-fsdb1 {/u01/app/oracle/product/11.2.0.4/dbhome_1/rdbms/admin}:sqlplus dbfs_user/oracle
SQL*Plus: Release 11.2.0.4.0 Production on Fri Jul 4 12:46:22 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
SQL> start dbfs_create_filesystem dbfsts FS1
No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_FS1', tbl_name => 'T_FS1', tbl_tbs => 'dbfsts', lob_tbs => 'dbfsts', do_partition => false, partition_key => 1, do_compress => false, compression => '', do_dedup => false, do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'FS_FS1', provider_name => 'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'FS_FS1', store_mount=>'FS1'); end;
--------
CHMOD STORE:
declare m integer; begin m := dbms_fuse.fs_chmod('/FS1', 16895); end;
No errors.
- Perform the one-time setup steps for mounting the file system. Download the mount-dbfs.sh script attached to MOS note 1054431.1; it provides the logic and scripting necessary to mount DBFS as a cluster resource.
- There are two options for mounting the DBFS file system, and each results in the file system being available at /dbfs_direct. Choose one of the two.
- The first option is to use the dbfs_client command directly, without an Oracle Wallet. No additional setup steps are required for this option.
- The second option is to use an Oracle Wallet to store the password and mount DBFS with the mount command.
- On all compute nodes, set the library path using the following commands:
[root@oraclouddbadm01 ~]# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/11.2.0.4/dbhome_1/lib/libnnz11.so /usr/local/lib/libnnz11.so
[root@oraclouddbadm01 ~]# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/11.2.0.4/dbhome_1/lib/libclntsh.so.11.1 /usr/local/lib/libclntsh.so.11.1
[root@oraclouddbadm01 ~]# dcli -g dbs_group -l root ln -s /lib64/libfuse.so.2 /usr/local/lib/libfuse.so.2
[root@oraclouddbadm01 ~]# dcli -g dbs_group -l root 'echo /usr/local/lib >> /etc/ld.so.conf.d/usr_local_lib.conf'
[root@oraclouddbadm01 ~]# dcli -g dbs_group -l root ldconfig
- Create a new TNS_ADMIN directory ($HOME/dbfs/tnsadmin) for exclusive use by the DBFS mount script.
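The directory creation is not shown; a sketch with dcli:

```shell
# Create the dedicated TNS_ADMIN directory on all nodes as oracle.
dcli -g ~/dbs_group -l oracle mkdir -p /home/oracle/dbfs/tnsadmin
```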
- Create the $HOME/dbfs/tnsadmin/tnsnames.ora file with the following contents on the first node. Here the DBFS repository database is named fsdb, the instance on the first node is named fsdb1, and ORACLE_HOME is /u01/app/oracle/product/11.2.0.4/dbhome_1.
oraclouddbadm01-dbm011 {/home/oracle}:cat $HOME/dbfs/tnsadmin/tnsnames.ora
fsdb.local =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL=BEQ)
      (PROGRAM=/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle)
      (ARGV0=oraclefsdb1)
      (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
      (ENVS='ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1,ORACLE_SID=fsdb1')
    )
    (CONNECT_DATA=(SID=fsdb1))
  )
- On the other nodes, create similar entries (all named "fsdb.local"), changing every occurrence of fsdb1 to the instance name running on that node. Each node's tnsnames.ora is therefore slightly different: it references the instance running locally on that node.
oraclouddbadm02-dbm012 {/home/oracle/dbfs/tnsadmin}:cat tnsnames.ora
fsdb.local =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL=BEQ)
      (PROGRAM=/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle)
      (ARGV0=oraclefsdb2)
      (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
      (ENVS='ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1,ORACLE_SID=fsdb2')
    )
    (CONNECT_DATA=(SID=fsdb2))
  )
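Since each node's tnsnames.ora differs only in the local instance name, the file can be generated rather than hand-edited; a sketch (the SID value, ORACLE_HOME, and the /tmp output path are illustrative — on a real node they would come from the node's environment):

```shell
#!/bin/sh
# Generate a per-node tnsnames.ora from the local instance name.
SID=fsdb2                                          # instance on this node (assumption)
OH=/u01/app/oracle/product/11.2.0.4/dbhome_1       # RDBMS home (assumption)
cat > /tmp/tnsnames.ora <<EOF
fsdb.local =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL=BEQ)
      (PROGRAM=$OH/bin/oracle)
      (ARGV0=oracle$SID)
      (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
      (ENVS='ORACLE_HOME=$OH,ORACLE_SID=$SID')
    )
    (CONNECT_DATA=(SID=$SID))
  )
EOF
```

Running this on each node (with the correct SID) and placing the output in $HOME/dbfs/tnsadmin keeps the fsdb.local alias identical everywhere while pointing at the local instance.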
- On each node, create the $HOME/dbfs/tnsadmin/sqlnet.ora file with identical contents.
oraclouddbadm01-dbm011 {/home/oracle}:cat $HOME/dbfs/tnsadmin/sqlnet.ora
WALLET_LOCATION =
  (SOURCE=(METHOD=FILE)
    (METHOD_DATA=(DIRECTORY=/home/oracle/dbfs/wallet))
  )
SQLNET.WALLET_OVERRIDE = TRUE
- Copy the file to all nodes using dcli:
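The copy command is not shown; a sketch using dcli's -d/-f options as elsewhere in this article:

```shell
# Push the common sqlnet.ora to every node's TNS_ADMIN directory.
dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/tnsadmin -f $HOME/dbfs/tnsadmin/sqlnet.ora
```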
- Create a wallet directory on one database server as the oracle user. For example:
- Create an empty auto-login wallet:
Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.
Enter password:
Enter password again:
- Add the necessary credentials to the wallet. The credentials can be specific for the connect string used as shown here:
Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:
Create credential oracle.security.client.connect_string1
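The mkstore commands behind the output above are not shown; a sketch of the three wallet steps, assuming the dbfs_user password oracle and the fsdb.local alias used elsewhere in this article (each mkstore call prompts for the wallet password):

```shell
# Run as oracle on the first node.
mkdir -p $HOME/dbfs/wallet
$ORACLE_HOME/bin/mkstore -wrl $HOME/dbfs/wallet -create
$ORACLE_HOME/bin/mkstore -wrl $HOME/dbfs/wallet -createCredential fsdb.local dbfs_user oracle
```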
- Copy the wallet files to all database nodes.
oraclouddbadm01-dbm011
{/home/oracle}:dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/wallet -f
$HOME/dbfs/wallet/ewallet.p12
oraclouddbadm01-dbm011
{/home/oracle}:dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/wallet -f $HOME/dbfs/wallet/cwallet.sso
- Ensure that the TNS entry specified above (fsdb.local in the example) is defined and working properly (checking with "TNS_ADMIN=/home/oracle/dbfs/tnsadmin tnsping fsdb.local" is a good test).
oraclouddbadm01: OK (20 msec)
oraclouddbadm02: OK (20 msec)
- Download the mount-dbfs.sh script attached to MOS note 1054431.1 and place it on one database server in a temporary location (like /tmp/mount-dbfs.sh).
[root@oraclouddbadm01 ~]# dos2unix /tmp/mount-dbfs.sh
dos2unix: converting file /tmp/mount-dbfs.sh to UNIX format ...
- Edit the variable settings in the top of the script for your environment. Comments in the script will help you to confirm the values for these variables.
MOUNT_POINT --> /dbfs_direct
DBFS_USER --> dbfs_user
ORACLE_HOME (should be the RDBMS ORACLE_HOME directory) --> /u01/app/oracle/product/11.2.0.4/dbhome_1
LOGGER_FACILITY (used by syslog to log the messages/output from this script) --> user
MOUNT_OPTIONS --> allow_other,direct_io
DBFS_PASSWD (used only if WALLET=false) --> oracle
DBFS_PWDFILE_BASE (used only if WALLET=false) --> /tmp/.dbfs-passwd.txt
WALLET (must be true or false) --> TRUE
TNS_ADMIN (used only if WALLET=true) --> /home/oracle/dbfs/tnsadmin
DBFS_LOCAL_TNSALIAS --> fsdb.local
- After editing, copy the script to the proper directory (GI_HOME/crs/script) on database nodes and set proper permissions on it, as the root user:
[root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root chown oracle:dba /u01/app/11.2.0.4/grid/crs/script/mount-dbfs.sh
[root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root chmod 750 /u01/app/11.2.0.4/grid/crs/script/mount-dbfs.sh
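The copy itself is not shown; a sketch using dcli, assuming the edited script was staged in /tmp as in the dos2unix step:

```shell
# Distribute mount-dbfs.sh to the Grid Infrastructure script directory on all nodes.
dcli -g ~/dbs_group -l root -d /u01/app/11.2.0.4/grid/crs/script -f /tmp/mount-dbfs.sh
```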
- With the one-time setup complete, the Clusterware resource for DBFS mounting can be registered. Create this short script and run it as the RDBMS owner (oracle) on only one compute node in the cluster.
oraclouddbadm01-dbm011 {/home/oracle}:cat add-dbfs-resource.sh
#!/bin/bash
ACTION_SCRIPT=/u01/app/11.2.0.4/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DBNAME=fsdb
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/11.2.0.4/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
  -type local_resource \
  -attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
    CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
    START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
    STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
    SCRIPT_TIMEOUT=300"
- Then run this script as the Grid Infrastructure owner (typically oracle) on one database server only. When successful, the command produces no output.
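The invocation itself is not shown; assuming the script was saved as add-dbfs-resource.sh in the oracle user's home directory:

```shell
# Register the dbfs_mount resource; success produces no output.
cd $HOME && sh ./add-dbfs-resource.sh
```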
- It is not necessary to restart the database resource at this point; however, review the note below about restarting the database now that the dependencies have been added.
oraclouddbadm01-dbm011 {/home/oracle}:srvctl status database -d fsdb
Instance fsdb1 is not running on node oraclouddbadm01
Instance fsdb2 is not running on node oraclouddbadm02
oraclouddbadm01-dbm011 {/u01/app/11.2.0.4/grid/crs/script}:srvctl start database -d fsdb
oraclouddbadm01-dbm011 {/u01/app/11.2.0.4/grid/crs/script}:srvctl status database -d fsdb
Instance fsdb1 is running on node oraclouddbadm01
Instance fsdb2 is running on node oraclouddbadm02
Note: After creating the $RESNAME resource, in order to stop the $DBNAME database while the $RESNAME resource is ONLINE, you must specify the force flag when using srvctl, for example "srvctl stop database -d fsdb -f". If you do not specify the -f flag, you will receive an error like this:
(oracle)$ srvctl stop database -d fsdb
PRCD-1124 : Failed to stop database fsdb and its services
PRCR-1065 : Failed to stop resource (((((NAME STARTS_WITH ora.fsdb.) && (NAME ENDS_WITH .svc)) && (TYPE == ora.service.type)) && ((STATE != OFFLINE) || (TARGET != OFFLINE))) || (((NAME == ora.fsdb.db) && (TYPE == ora.database.type)) && (STATE != OFFLINE)))
CRS-2529: Unable to act on 'ora.fsdb.db' because that would require stopping or relocating 'dbfs_mount', but the force option was not specified
Using the -f flag allows a successful shutdown and results in no output.
How to Manage the DBFS Mount
- After the resource is created, you should be able to see the dbfs_mount resource by running crsctl stat res dbfs_mount -t; it will show OFFLINE on all nodes, as below:
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_mount
               OFFLINE OFFLINE      oraclouddbadm01
               OFFLINE OFFLINE      oraclouddbadm02
- To bring dbfs_mount online, which mounts the file system on all nodes, run crsctl start resource dbfs_mount from any cluster node. For example:
CRS-2672: Attempting to start 'dbfs_mount' on 'oraclouddbadm02'
CRS-2672: Attempting to start 'dbfs_mount' on 'oraclouddbadm01'
CRS-2676: Start of 'dbfs_mount' on 'oraclouddbadm01' succeeded
CRS-2676: Start of 'dbfs_mount' on 'oraclouddbadm02' succeeded
oraclouddbadm01-dbm011 {/u01/app/11.2.0.4/grid/crs/script}:/u01/app/11.2.0.4/grid/bin/crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_mount
               ONLINE  ONLINE       oraclouddbadm01
               ONLINE  ONLINE       oraclouddbadm02
- Once the dbfs_mount Clusterware resource is online, you should see the mount point with df -h on each node. The default startup policy for this resource is "restore", which means that if it was online before Clusterware was stopped, it will attempt to come online again after Clusterware restarts.
Filesystem            Size  Used Avail Use% Mounted on
dbfs                  1.5M   40K  1.4M   3% /dbfs_direct
- To unmount DBFS on all nodes, stop the dbfs_mount Clusterware resource as the oracle user.
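Stopping the Clusterware resource unmounts the file system on every node at once; a sketch, using the crsctl path shown earlier in this article:

```shell
# Stop (unmount) DBFS on all nodes via the Clusterware resource.
/u01/app/11.2.0.4/grid/bin/crsctl stop resource dbfs_mount
```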
Conclusion
In this article we have learned how to configure the Database File System (DBFS) on an Exadata Database Machine running Oracle Database software 11.2.0.4. DBFS can be used for many purposes, such as storing large files, ETL staging, and GoldenGate trail and checkpoint files.