This is a follow-up post describing the implementation details of the Hadoop Applier, and the steps to configure and install it. The Hadoop Applier integrates MySQL with Hadoop by replicating INSERTs to HDFS in real time, so the data can be consumed by data stores running on top of Hadoop. You can learn more about the design rationale and prerequisites in the previous post.
Design and Implementation:
Hadoop Applier replicates rows inserted into a table in MySQL to the Hadoop Distributed File System (HDFS). It uses an API provided by libhdfs, a C library for manipulating files in HDFS. The library comes pre-compiled with Hadoop distributions.
It connects to the MySQL master (or reads a binary log generated by MySQL) and:
- fetches the row insert events occurring on the master
- decodes these events, extracting the data inserted into each field of the row
- uses content handlers to get the data into the required format and appends it to a text file in HDFS.
Schema equivalence is a simple mapping: databases are mapped as separate directories, with their tables as sub-directories. Data inserted into each table is written into text files (named datafile1.txt) in HDFS. The data can be written in comma-separated format, or with any other delimiter; this is configurable via command line arguments. The diagram explains the mapping between the MySQL and HDFS schema. The timestamp at which the event occurs is included as the first field of each row inserted into the text file.
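As a rough illustration (the base directory, database and table names, and delimiters below are examples; the actual values depend on your configuration), an insert into table t of database db is appended to a file laid out like this:
/user/hive/warehouse/db/t/datafile1.txt
with each line of the form <timestamp><field-delimiter><field 1><field-delimiter><field 2>...<row-delimiter>.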
The implementation follows these steps:
- Connect to the MySQL master using the interfaces to the binary log
#include "binlog_api.h"
Binary_log binlog(create_transport(mysql_uri.c_str()));
binlog.connect();
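Here mysql_uri identifies the master in the form mysql://user[:password]@host[:port], for example mysql://root@127.0.0.1:3306; the same format is accepted by the happlier executable on the command line (see the run instructions below).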
- Register content handlers
/*
  Table_index is a sub class of Content_handler class in the Binlog API
*/
Applier replay_hndlr(&table_event_hdlr, &sqltohdfs_obj);
binlog.content_handler_pipeline()->push_back(&table_event_hdlr);
binlog.content_handler_pipeline()->push_back(&replay_hndlr);
- Start an event loop and wait for the events to occur on the master
while (true)
{
  /*
    Pull events from the master. This is the heart beat of the event listener.
  */
  Binary_log_event *event;
  binlog.wait_for_next_event(&event);
  /*
    The caller is responsible for releasing the event once it has been
    handled (see the discussion in the comments below).
  */
  delete event;
}
- Decode the event using the content handler interfaces
class Applier : public mysql::Content_handler
{
public:
  Applier(Table_index *index, HDFSSchema *mysqltohdfs_obj)
  {
    m_table_index= index;
    m_hdfs_schema= mysqltohdfs_obj;
  }
  mysql::Binary_log_event *process_event(mysql::Row_event *rev)
  {
    int table_id= rev->table_id;
    typedef std::map<long int, mysql::Table_map_event *> Int2event_map;
    Int2event_map::iterator ti_it= m_table_index->find(table_id);
- Each row event contains multiple rows and fields. Iterate one row at a time using Row_iterator.
mysql::Row_event_set rows(rev, ti_it->second);
mysql::Row_event_set::iterator it= rows.begin();
do
{
  mysql::Row_of_fields fields= *it;
  long int timestamp= rev->header()->timestamp;
  if (rev->get_event_type() == mysql::WRITE_ROWS_EVENT)
    table_insert(db_name, table_name, fields, timestamp, m_hdfs_schema);
}
while (++it != rows.end());
- Get the field data separated by field delimiters and row delimiters.
mysql::Row_of_fields::const_iterator field_it= fields.begin();
mysql::Converter converter;
std::ostringstream data;
data << timestamp;
do
{
  field_index_counter++;
  std::vector<long int>::iterator it;
  std::string str;
  converter.to(str, *field_it);
  data << sqltohdfs_obj->hdfs_field_delim;
  data << str;
}
while (++field_it != fields.end());
data << sqltohdfs_obj->hdfs_row_delim;
- Connect to the HDFS file system.
If not provided, the connection information (user name, password, host and port) is read from the XML configuration file, hadoop-site.xml.
HdfsFS m_fs= hdfsConnect(host.c_str(), port);
- Create the directory structure in HDFS.
Set the working directory and open the file in append mode.
hdfsSetWorkingDirectory(m_fs, (stream_dir_path.str()).c_str());
const char* write_path= "datafile1.txt";
hdfsFile writeFile;
writeFile= hdfsOpenFile(m_fs, write_path, O_WRONLY|O_APPEND, 0, 0, 0);
/* data is the std::ostringstream assembled above; write out its contents */
std::string row= data.str();
tSize num_written_bytes= hdfsWrite(m_fs, writeFile, (void*)row.c_str(), row.length());
hdfsFlush(m_fs, writeFile);
hdfsCloseFile(m_fs, writeFile);
Follow these steps to install and run the Applier:
1. Download and install Hadoop (you will need HDFS, i.e. the namenode and datanode, running later on).
2. Set the environment variable $HADOOP_HOME to point to the Hadoop installation directory.
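For example (the path below is illustrative and depends on where Hadoop is installed on your system):
$export HADOOP_HOME=/usr/local/hadoop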
3. CMake doesn't come with a 'find' module for libhdfs. Ensure that 'FindHDFS.cmake' is in the CMAKE_MODULE_PATH. You can download a copy here.
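If CMake does not pick the module up automatically, one option (a sketch, not a requirement) is to pass the directory containing it explicitly when you run cmake later on:
$cmake . -DCMAKE_MODULE_PATH=/path/to/dir/containing/FindHDFS.cmake -DENABLE_DOWNLOADS=1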
4. Edit the file 'FindHDFS.cmake', if necessary, so that HDFS_LIB_PATHS is set to the path containing libhdfs.so, and HDFS_INCLUDE_DIRS to the path containing hdfs.h.
For 1.x versions, the library path is $ENV{HADOOP_HOME}/c++/Linux-i386-32/lib, and the header files are contained in $ENV{HADOOP_HOME}/src/c++/libhdfs. For 2.x releases, the libraries and header files can be found in $ENV{HADOOP_HOME}/lib/native and $ENV{HADOOP_HOME}/include respectively.
For versions 1.x, this patch will fix the paths:
--- a/cmake_modules/FindHDFS.cmake
+++ b/cmake_modules/FindHDFS.cmake
@@ -11,6 +11,7 @@ exec_program(hadoop ARGS version OUTPUT_VARIABLE Hadoop_VERSION
# currently only looking in HADOOP_HOME
find_path(HDFS_INCLUDE_DIR hdfs.h PATHS
$ENV{HADOOP_HOME}/include/
+ $ENV{HADOOP_HOME}/src/c++/libhdfs/
# make sure we don't accidentally pick up a different version
NO_DEFAULT_PATH
)
@@ -26,9 +27,9 @@ endif()
message(STATUS "Architecture: ${arch_hint}")
if ("${arch_hint}" STREQUAL "x64")
- set(HDFS_LIB_PATHS $ENV{HADOOP_HOME}/lib/native)
+ set(HDFS_LIB_PATHS $ENV{HADOOP_HOME}/c++/Linux-amd64-64/lib)
else ()
- set(HDFS_LIB_PATHS $ENV{HADOOP_HOME}/lib/native)
+ set(HDFS_LIB_PATHS $ENV{HADOOP_HOME}/c++/Linux-i386-32/lib)
endif ()
message(STATUS "HDFS_LIB_PATHS: ${HDFS_LIB_PATHS}")
5. Since libhdfs is a JNI-based API, it requires JNI header files and libraries to build. If a module FindJNI.cmake exists in the CMAKE_MODULE_PATH and JAVA_HOME is set, the headers will be included and the libraries will be linked. If not, you will need to include the headers and load the libraries separately (modify LD_LIBRARY_PATH).
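For example (the exact paths depend on your JDK and Hadoop layout; the ones below are illustrative):
$export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$JAVA_HOME/jre/lib/amd64/server:$HADOOP_HOME/lib/native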
6. Build and install the library 'libreplication', to be used by the Hadoop Applier, using CMake.
- Download a copy of the Hadoop Applier from http://labs.mysql.com.
- The 'mysqlclient' library is required to be installed in the default library paths. You can either download and install it (you can get a copy here), or set the environment variable $MYSQL_DIR to point to the parent directory of the MySQL source code. Make sure to run cmake on the MySQL source directory.
$export MYSQL_DIR=/usr/local/mysql-5.6
- Run the 'cmake' command on the parent directory of the Hadoop Applier source. This will generate the necessary Makefiles. Make sure to set the cmake option ENABLE_DOWNLOADS=1, which will install Google Test, required to run the unit tests.
$cmake . -DENABLE_DOWNLOADS=1
- Run 'make' and 'make install' to build and install. This will install the library 'libreplication', which is to be used by the Hadoop Applier.
7. Make sure to set the CLASSPATH to include all the Hadoop jars needed to run Hadoop itself.
$export PATH=$HADOOP_HOME/bin:$PATH
$export CLASSPATH=$(hadoop classpath)
8. The code for the Hadoop Applier can be found in /examples/mysql2hdfs in the Hadoop Applier repository. To compile, you can simply load the libraries (modify LD_LIBRARY_PATH if required) and run the command "make happlier" in your terminal. This will create an executable file in the mysql2hdfs directory.
.. and then you are done!
Now run Hadoop DFS (namenode and datanode), start a MySQL server as master with row-based replication (for testing purposes you can use the mtr rpl suite: $MySQL-5.6/mysql-test$ ./mtr --start --suite=rpl --mysqld=--binlog_format='ROW' --mysqld=--binlog_checksum=NONE), start Hive (optional) and run the executable ./happlier, optionally providing MySQL and HDFS URIs and other available command line options (./happlier --help for more info).
There are useful filters available as command line options to the Hadoop Applier.
Options | Use
-r, --field-delimiter=DELIM : Use DELIM instead of ctrl-A for the field delimiter. DELIM can be a string or an ASCII value in the format '\nnn'. Escape sequences are not allowed. | Provide the string by which the fields in a row will be separated. By default, it is set to ctrl-A.
-w, --row-delimiter=DELIM : Use DELIM instead of LINE FEED for the row delimiter. DELIM can be a string or an ASCII value in the format '\nnn'. Escape sequences are not allowed. | Provide the string by which the rows of a table will be separated. By default, it is set to LINE FEED (\n).
-d, --databases=DB_LIST : DB_LIST is made up of one database name, or many names separated by commas. Each database name can optionally be followed by table names. The table names must follow the database name, separated by HYPHENS. | Import entries for some databases, optionally including only the specified tables.
-f, --fields=LIST : Similar to the cut command, LIST is made up of one range, or many ranges separated by commas. Each range is one of: N (N'th byte, character or field, counted from 1); N- (from the N'th byte, character or field to the end of the line); N-M (from the N'th to the M'th, included, byte, character or field); -M (from the first to the M'th, included, byte, character or field). | Import entries for only some fields of a table.
-h, --help | Display help
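For example, an invocation along these lines (the URIs are the ones used elsewhere in this post; the database name is illustrative, and the option syntax is as described above) writes comma-separated fields and imports only the database db1:
$./happlier --field-delimiter=, --databases=db1 mysql://root@127.0.0.1:3306 hdfs://localhost:9000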
Integration with HIVE:
Hive runs on top of Hadoop. It is sufficient to install Hive only on the Hadoop master node.
Take note of the default data warehouse directory, set as a property in the hive-default.xml.template configuration file. This must be the same as the base directory into which the Hadoop Applier writes.
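In a stock Hive installation this is the hive.metastore.warehouse.dir property; the snippet below shows the usual Hive default, not anything specific to the Applier:
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>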
Since the Applier does not import DDL statements, you have to create similar schemas on both MySQL and Hive, i.e. set up a similar database in Hive using HiveQL (Hive Query Language). Since timestamps are inserted as the first field in the HDFS files, you must take this into account while creating tables in Hive.
SQL Query | Hive Query
CREATE TABLE t (i INT); | CREATE TABLE t (time_stamp INT, i INT) [ROW FORMAT DELIMITED] STORED AS TEXTFILE;
Now, when any row is inserted into a table in the MySQL databases, a corresponding entry is made in the Hive tables. Watch the demo to get a better understanding. The demo has no audio, and is meant to be followed in conjunction with the blog. You can also create an external table in Hive and load data into the tables; it's your choice! A sketch of the external-table approach follows the demo link below.
Watch the Hadoop Applier Demo >>
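A hypothetical HiveQL sketch of that external-table approach, assuming the Applier was started with --field-delimiter=, and writes the table under /user/hive/warehouse/db/t (both the delimiter and the location are assumptions that depend on your configuration):
CREATE EXTERNAL TABLE t (time_stamp INT, i INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/hive/warehouse/db/t';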
Limitations of the Applier:
In the first version we support only WRITE_ROWS_EVENTs, i.e. only insert statements are replicated.
We have considered adding support for deletes, updates and DDLs as well, but they are more complicated to handle and we are not sure how much interest there is in this.
We would very much appreciate your feedback on requirements - please use the comments section of this blog to let us know!
The Hadoop Applier is compatible with MySQL 5.6; however, it does not import events if binlog checksums are enabled. Make sure to set them to NONE on the master, and run the server in row-based replication mode.
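A minimal master configuration along these lines satisfies those requirements (assembled from the configuration discussed in the comments below; the log name, server-id and port are examples):
[mysqld]
log-bin=mysqlbin-log
binlog_format=ROW
binlog_checksum=NONE
server-id=2   # anything other than 1, which the Applier uses when it registers as a slave
port=3306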
This innovation includes dedicated contributions from Neha Kumari, Mats Kindahl and Narayanan Venkateswaran. Without them, this project would not be a success.
Give it a try! You can download a copy from http://labs.mysql.com and get started NOW!
Hi Shubhangi, it seems we are working on similar projects.
https://github.com/noplay/python-mysql-replication
I think my last commit may interest you; it is the implementation of checksum:
https://github.com/noplay/python-mysql-replication/pull/27
Perhaps we can exchange by mail (julien@duponchelle.info).
Hi Julien,
It is a very interesting project, thank you for the offer.
However, we have implemented checksum already, and also, our development is in C++ .
Hi Shubhangi
Great work... When will the update/delete feature be available?
Great work!! :D
One question: does binlog.wait_for_next_event(&event); need the 'event' memory to be released?
I see examples/mysql2hdfs/ releasing the memory with 'delete event;', while the rest of the examples do not (memory leak?).
Hi!
Yes, it is the responsibility of the caller to release the memory allocated to the event, as done in examples/mysql2hdfs.
Also, as you detect correctly, it is a bug in the example programs.
It has been reported, and will be fixed in the release.
Great Work!
Hello Shubhangi!
Nice project which is much needed and will eliminate a few layers in complex workflows.
Waiting for the project to get matured :)
All the very best !!
Sandeep.Bodla
sandeep.bigdata@gmail.com
Shubhangi,
Please let me know if it is open source, I want to contribute :)
Thanks,
Sandeep.
Hi Sandeep!
Thank you for trying out the product, and it is great to see you willing to contribute to it. Thanks!
The product is GPL, so it is open source by the definition of the Open Source Initiative, and you can definitely contribute to it. To start with patches, you have to sign the OCA, Oracle contributor agreement.
However, please note that we are refactoring the code, and I am not sure if your patches would work.
Hi Shubangi,
Interesting work!!
I wanted to ask about how is the translation done from MySQL schema to Hive Schema. Does that have to be done as offline process for both the systems separately, or simply creating schema in 1 system, say MySQL, will allow us to replicate data to HDFS and also put auto-generated Hive schema on top of that?
Thanks,
Ravi
Hi Ravi!
Thank you for trying out the applier, and bringing this to attention!
The translation is to be done offline.
You need to create similar schemas in 'both', MySQL as well as Hive.
Creating a schema in MySQL (a DDL operation) will not generate a schema in Hive automatically; it requires a manual effort. Please note replication of DDL statements is not supported in this release.
Also, the schema equivalence is required before starting the real time replication from MySQL to Hive.
Please reply on the thread in case you have any other questions.
Hope this helps,
Shubhangi
Please could you confirm that 'mixed' binlog format is not supported at this point?
Thanks
Hi!
Yes, at this point, only row format is supported by the Applier. Mixed mode is not completely supported, i.e. in this mode, only the inserts which are mapped as (table map+row events) in MySQL will be replicated.
Thank you for the question. Can I please request for a use case where this is a requirement? It can help us shape the future releases of the Applier.
Thank you,
Shubhangi
Shubhangi,
ReplyDeleteGreat idea !
You are filling another gap (the real-time integration) between RDBMS and Hadoop.
ADB
Shubhangi,
I've run cmake successfully, but make failed:
/opt/mysql-hadoop-applier-0.1.0/src/value.cpp: In function 'uint32_t mysql::calc_field_size(unsigned char, const unsigned char*, uint32_t)':
/opt/mysql-hadoop-applier-0.1.0/src/value.cpp:151: error: 'MYSQL_TYPE_TIME2' was not declared in this scope
/opt/mysql-hadoop-applier-0.1.0/src/value.cpp:157: error: 'MYSQL_TYPE_TIMESTAMP2' was not declared in this scope
/opt/mysql-hadoop-applier-0.1.0/src/value.cpp:163: error: 'MYSQL_TYPE_DATETIME2' was not declared in this scope
make[2]: *** [src/CMakeFiles/replication_static.dir/value.cpp.o] Error 1
make[1]: *** [src/CMakeFiles/replication_static.dir/all] Error 2
make: *** [all] Error 2
Thanks
Hi!
Thank you for trying the Applier!
Sorry, the issue with the compilation is because you are using the libmysqlclient library released with MySQL-5.5.
Since the data types MYSQL_TYPE_TIME2, MYSQL_TYPE_TIMESTAMP2, MYSQL_TYPE_DATETIME2 are defined only in the latest release of MySQL (5.6 GA), the 'make' command fails.
This is a bug, and has been reported.
In order to compile, I suggest to please use either the latest released version of MySQL source code (5.6), or the latest GA release of connector C library(libmysqlclient.so -6.1.1).
You can get a copy of the connector here.
Hope it helps.
Please reply on the thread if you are still facing issues!
Regards,
Shubhangi
Shubhangi,
Thank you, I've successfully compiled the applier after changing MySQL to 5.6. But running the applier gives an error that says "Can't connect to the master.":
[root@localhost mysql2hdfs]# ./happlier --field-delimiter=, mysql://root@127.0.0.1:3306 hdfs://localhost:9000
The default data warehouse directory in HDFS will be set to /usr/hive/warehouse
Change the default data warehouse directory? (Y or N) N
Connected to HDFS file system
The data warehouse directory is set as /user/hive/warehouse
Can't connect to the master.
Thanks,
Chuanliang
Hi Chuanliang,
The above error means that the applier is not able to connect to the MySQL master. It can be due to two reasons:
- MySQL server is not started on port 3306 on localhost
- You may not have the permissions to connect to the master as a root user.
To be sure, could you try opening a separate mysql client, and connect to it using the same params, i.e. user=root, host=localhost, port=3306?
Hope it Helps.
Shubhangi
Hi Shubhangi,
I tried this command,
mysql -uroot -hlocalhost -P3306
It can enter in MySQL
Thanks,
Chuanliang
Hi Chuanliang,
That is good, my above suspicions were wrong. Sorry about that.
Can you please check the following for MySQL server:
- binlog_checksum is set to NONE
- Start the server with the cmd line option --binlog_checksum=NONE
- Logging into binary log is enabled:
- Start the server with the cmd line option --log-bin=binlog-name
Also, specify --binlog_format=ROW in order to replicate into HDFS.
Thank you,
Shubhangi
Hi Shubhangi,
Yes, I ran the mtr rpl suite as you've written in this post, and data can be replicated to Hive. But this is a MySQL test run; in order to make the real server run:
1. I configured these options in /etc/my.cnf like this,
[mysqld]
log-bin=mysqlbin-log
binlog_format=ROW
binlog_checksum=NONE
2. "service mysql start" start mysql server
3. It can produce binlog file under mysql/data/
4. But when I use applier,
./happlier --field-delimiter=, mysql://root@127.0.0.1:3306 hdfs://localhost:9000
errors occur:
[root@localhost mysql2hdfs]# ./happlier --field-delimiter=, mysql://root@127.0.0.1:3306 hdfs://localhost:9000
The default data warehouse directory in HDFS will be set to /usr/hive/warehouse
Change the default data warehouse directory? (Y or N) N
Connected to HDFS file system
The data warehouse directory is set as /user/hive/warehouse
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f1ed83b647d, pid=17282, tid=139770481727264
#
# JRE version: 6.0_31-b04
# Java VM: Java HotSpot(TM) 64-Bit Server VM (20.6-b01 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libstdc++.so.6+0x9c47d] std::string::compare(std::string const&) const+0xd
#
# An error report file with more information is saved as:
# /opt/mysql-hadoop-applier-0.1.0/examples/mysql2hdfs/hs_err_pid17282.log
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Aborted
Regards,
Chualiang
Hi Chualiang!
Good that you can run it using mtr. :)
The problem is that the MySQL master is mis-configured, since you do not provide server-id in the cnf file. Please add that too in the conf file.
The file /etc/my.cnf should contain at least
[mysqld]
log-bin=mysqlbin-log
binlog_format=ROW
binlog_checksum=NONE
server-id=2 #please note that this can be anything other than 1, since applier uses 1 to act as a slave (code in src/tcp_driver.cpp), so MySQL server cannot have the same id.
port=3306
Hope it helps.
Regards,
Shubhangi
Hi Shubhangi,
Thank you for your reply, with your suggestion the error is gone. :)
And I have some other questions:
1. How can Applier connect to a server with password?
2. If I want to collect data from more than one MySQL Server at the same time, how can I implement it with the Applier? By writing a shell script to set up many Applier connections together? Can you give me some advice?
Regards,
Chuanliang
Hi Chuanliang,
Great!
Please find the answers below:
1. You need to pass the MySQL uri to the Applier in the following format user[:password]@host[:port]
For example: ./happlier user:password@localhost:13000
2. Yes, that is possible. However, one instance of the applier can connect to only one MySQL server at a time. In order to collect data from multiple servers, you need to start multiple instances of the applier ( you can use the same executable happlier simultaneously for all the connections).
Yes, you may write a shell script to start a pair of a MySQL server and the applier, for collecting data from all of them.
Also, I find it very interesting to improve the applier in order that a single instance can connect to more than one server at a time. We might consider this for the future releases. Thank you for the idea. If I may ask, can you please provide the use case where you require this implementation?
Thank you,
Shubhangi
Hi Shubhangi,
We are a game company that operates many mobile and internet games. Most of the games use MySQL as the database. Data produced by games and players is growing daily, and the game operation department needs information about the games to make marketing decisions.
In order to store and analyze the huge amount of data, we use Hadoop. First we used Sqoop to collect and import data from multiple MySQL servers, and developed a system to manage all collecting tasks, like creating tasks via a plan and viewing the process and reports. However, in order not to affect the running of the games, collection always runs at night, so the status information about the games arrives with a large delay. Then I found the Applier; I think the real-time data replication manner is great, so I want to replace our collection with the Applier.
This is our use case. :)
I'm looking forward to see applier's future releases. And If the applier can support connecting multiple servers in a single instance. maybe you can also provide a tool to manage and control the process.
Thank you,
Chuanliang
Hi Chuanliang!
Thanks a lot for the wonderful explanation. The Applier is aimed to solve issues with exactly such delays involved in the operation.
It is a very valid use case for the Applier, and I can clearly mark out the requirement of supporting data feed from multiple servers. Thanks once again, this will help us decide on the future road map for the Applier.
Please stay tuned for the updates on the product, and provide us with any other feedbacks you have.
Regards,
Shubhangi
Hi Shubhangi,
Thank you too for providing us such an amazing product. If you have updates for the product, I'll be very pleased to try it. My E-mail, lichuanliang@126.com.
Regards,
Chuanliang
Hi, when I'm executing the cmake command at step 6 I'm getting the following error. Please advise, as I'm new to the applier.
Warning: Source file "/home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/src/basic_transaction_parser.cpp" is listed multiple times for target "replication_shared".
Hi Archfiend!
Thank you for trying out the Applier!
The warning is generated because of the inclusion of the name "basic_transaction_parser.cpp" twice in the cmake file while setting the targets for the library.( code: mysql-hadoop-applier-0.1.0/src/CMakeLists.txt : line no. 5 and line no. 7)
This is our fault, thank you for pointing this out. This will be fixed in the next release.
For now, to fix it I request you to do the following:
-please modify the file mysql-hadoop-applier-0.1.0/src/CMakeLists.txt to remove any one of the two names (i.e. basic_transaction_parser.cpp , either from line no.7 or line no. 5)
-execute rm CMakeCache.txt from the base dir (/home/thilanj/hadoop/mysql-hadoop-applier-0.1.0), if it exists
- run cmake again.
Thank you once again. Please let me know if it works out for you.
Regards,
Shubhangi
Hi Shubhangi,
Thanks for the fast reply. On a separate note, I'm using Hortonworks Hadoop. Is this compatible with it?
Regards,
Archfiend
Cont.
I've been trying to execute the "make -j8" command as in the video tutorial, but am getting the following set of errors. The files mentioned in the error log are already there but I'm still getting this error. Please advise.
Error Log:
Scanning dependencies of target replication_shared
Scanning dependencies of target replication_static
[ 3%] [ 7%] [ 10%] [ 14%] [ 17%] [ 21%] [ 25%] [ 28%] Building CXX object src/CMakeFiles/replication_shared.dir/access_method_factory.cpp.o
Building CXX object src/CMakeFiles/replication_shared.dir/field_iterator.cpp.o
Building CXX object src/CMakeFiles/replication_static.dir/access_method_factory.cpp.o
Building CXX object src/CMakeFiles/replication_shared.dir/row_of_fields.cpp.o
Building CXX object src/CMakeFiles/replication_static.dir/field_iterator.cpp.o
Building CXX object src/CMakeFiles/replication_static.dir/row_of_fields.cpp.o
Building CXX object src/CMakeFiles/replication_shared.dir/basic_transaction_parser.cpp.o
Building CXX object src/CMakeFiles/replication_shared.dir/binlog_driver.cpp.o
In file included from /home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/include/binlog_driver.h:25,
from /home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/include/access_method_factory.h:24,
from /home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/src/access_method_factory.cpp:20:
/home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/include/protocol.h:24:23: error: my_global.h: No such file or directory
In file included from /home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/include/binlog_driver.h:25,
from /home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/include/access_method_factory.h:24,
from /home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/src/access_method_factory.cpp:20:
/home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/include/protocol.h:24:23: error: my_global.h: No such file or directory
/home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/include/protocol.h:25:19: error: mysql.h: No such file or directory
/home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/include/protocol.h:26:21: error: m_ctype.h: No such file or directory
/home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/include/protocol.h:27:24: error: sql_common.h: No such file or directory
In file included from /home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/include/value.h:24,
from /home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/include/field_iterator.h:25,
from /home/thilanj/hadoop/mysql-hadoop-applier-0.1.0/src/field_iterator.cpp:20:
Hi Archfiend!
Can you please mention how you are addressing the dependency on libmysqlclient: using MySQL source code, or the connector/C library directly?
Please make sure of the following:
If you are using MySQL source code for the mysqlclient library, make sure
1. The MySQL source code is built (i.e. run the cmake and make command on MySQL source code)
2. Set the environment variable MYSQL_DIR to point to the base directory of this source code (please note, do not give the path up to the lib directory, but only the base directory)
3. Please check that the file 'my_global.h' is present in the path $MYSQL_DIR/include and the library libmysqlclient.so in $MYSQL_DIR/lib
4. Delete CMakeCache.txt from the Hadoop Applier base directory
(rm CmakeCache.txt)
5. Run cmake and make again.
If you are using the library directly, make sure that you have the following:
If not explicitly specified,
1. The above mentioned files (my_global.h) must be in the standard header paths where the compiler looks for. (/usr/lib)
2. The library should be in the standard library paths
3. rm CMakeCache.txt
4. cmake and make again
Hope it helps!
Please reply in case if you still face issues.
Thank you,
Shubhangi
Have you considered replication to HBase, utilizing the versioning capability (http://hbase.apache.org/book.html#versions) to allow a high fidelity history to be maintained to support time series analysis etc?
ReplyDeleteHi, when I install with "make happlier", I see error like :
ReplyDeletehadoop-2.2.0/lib/native/libhdfs.so: could not read symbols: File in wrong format
How fix it ?
Hi,
Thank you for trying out the applier!
I am not sure, but it looks like the linker error is because the library version may be incompatible while compiling happlier.
Can you please make sure that the library is compiled for the same type, 32 bit (or 64 bit), as the rest of your object files, i.e. happlier and libreplication.so?
Please continue the thread if you are still facing the issues.
Thanks,
Shubhangi
Dear Shubhangi,
When I run the command "make happlier", I see the result:
[ 77%] Built target replication_static
Scanning dependencies of target happlier
[ 83%] Building CXX object examples/mysql2hdfs/CMakeFiles/happlier.dir/mysql2hdfs.cpp.o
[ 88%] Building CXX object examples/mysql2hdfs/CMakeFiles/happlier.dir/table_index.cpp.o
[ 94%] Building CXX object examples/mysql2hdfs/CMakeFiles/happlier.dir/hdfs_schema.cpp.o
[100%] Building CXX object examples/mysql2hdfs/CMakeFiles/happlier.dir/table_insert.cpp.o
Linking CXX executable happlier
/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libjawt.so: undefined reference to `awt_Unlock@SUNWprivate_1.1'
/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libjawt.so: undefined reference to `awt_GetComponent@SUNWprivate_1.1'
/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libjawt.so: undefined reference to `awt_Lock@SUNWprivate_1.1'
/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libjawt.so: undefined reference to `awt_GetDrawingSurface@SUNWprivate_1.1'
/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libjawt.so: undefined reference to `awt_FreeDrawingSurface@SUNWprivate_1.1'
collect2: ld returned 1 exit status
make[3]: *** [examples/mysql2hdfs/happlier] Error 1
make[2]: *** [examples/mysql2hdfs/CMakeFiles/happlier.dir/all] Error 2
make[1]: *** [examples/mysql2hdfs/CMakeFiles/happlier.dir/rule] Error 2
make: *** [happlier] Error 2
Can you please tell me how to fix that error?
Thanks,
Sidus
I had the same issue building this on a headless linux VM. To resolve, I manually linked $JAVA_HOME/jre/lib//xawt/libmawt.so to $JAVA_HOME/jre/lib/libmawt.so
then installed Xvfb and Xtst. Then it built fine.
Hi Sidus,
Thank you for trying out the Applier.
As I see it, the problem is while linking to the libjawt libraries.
Can you please make sure of the following:
1. Do you have the JAVA_HOME set ?
2. Do you have CLASS_PATH set to point to jars required to run Hadoop itself?
(command ~: export CLASSPATH= $(hadoop classpath) )
3. Can you please try running Hadoop and check if it runs fine?
Hope that helps.
Please reply in case it doesn't solve the issue.
Hi,
My JAVA_HOME is currently set to /usr/lib/jvm/java-6-openjdk
And my CLASSPATH is:
/opt/cnn/hadoop/hadoop-1.2.1/libexec/../conf:/usr/lib/jvm/java-6-openjdk/lib/tools.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/..:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../hadoop-core-1.2.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/asm-3.2.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/aspectjrt-1.6.11.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/aspectjtools-1.6.11.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-beanutils-1.7.0.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-beanutils-core-1.8.0.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-cli-1.2.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-codec-1.4.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-collections-3.2.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-configuration-1.6.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-daemon-1.0.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-digester-1.8.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-el-1.0.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-io-2.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-lang-2.4.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-logging-1.1.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-logging-api-1.0.4.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-math-2.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/commons-net-3.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/core-3.1.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/hadoop-capacity-scheduler-1.2.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/hadoop-fairscheduler-1.2.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/hadoop-thriftfs-1.2.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jdeb-0.8.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jersey-core-1.8.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jersey-json-1.8.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jersey-server-1.8.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jets3t-0.6.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jetty-6.1.26.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jetty-util-6.1.26.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jsch-0.1.42.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/junit-4.5.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/kfs-0.2.2.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/log4j-1.2.15.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/mockito-all-1.8.5.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/oro-2.0.8.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/slf4j-api-1.4.3.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/slf4j-log4j12-1.4.3.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/xmlenc-0.52.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/cnn/hadoop/hadoop-1.2.1/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/cnn/sqoop/current/lib
My Hadoop is running fine. My hadoop version is 1.2.1, is there any problem with it?
Hi Sidus,
The paths look correct.
I am not sure whether there are issues with Hadoop-1.2.1, I have not tested it yet.
Maybe installing the Oracle JDK (I use 1.7.0_03) instead of OpenJDK would help.
Thank you,
Shubhangi
Greetings!
First, thanks for the very interesting code.
This will be very useful.
At the moment there are some problems, one of which is that I found a case where it is skipping the wrong table. The issue seems to be that mysqld can change the numerical table_id associated with a given table (I think that in my particular case this was associated with a restart of mysqld and a re-reading of the logs from before the restart). Anyway, looking at the code from Table_index::process_event, which processes the TABLE_MAP_EVENTs, two issues arise:
1) If the table_id associated with the map event is already registered, the code ignores the update. Should the update cause the map to, well, update?
2) Is there a memory leak? Who deletes table map event objects.
For reference, the code is quoted below.
Thanks!
-Mike
mysql::Binary_log_event *Table_index::process_event(mysql::Table_map_event *tm)
{
if (find(tm->table_id) == end())
insert(Event_index_element(tm->table_id,tm));
/* Consume this event so it won't be deallocated beneath our feet */
return 0;
}
Hi Mike!
Thank you for trying out the applier! It is very encouraging to see you take interest in the code base. This shall make us overcome the shortcomings faster.
Regarding the question:
1. Only committed transactions are written into the binary log. If the server restarts, we don't apply binary logs. Therefore, the table-id will not change, or be updated, for a map event written in the binary log.
2. Yes, there is a memory leak in the code here. This is a bug, thank you for pointing it out.
You may report it on http://bugs.mysql.com , under the category 'Server: Binlog'. Or, I can do it on your behalf. Once it is reported, we will be able to commit a patch for the same.
Thank you once again.
Shubhangi
Greetings!
Again, thank you for the very interesting and promising software.
I am not an expert on MySQL, but I've been reading the documentation and the source code you provided.
Looking at this URL:
http://www.mysqlperformanceblog.com/2013/07/15/crash-resistant-replication-how-to-avoid-mysql-replication-errors/
It apparently is the responsibility of the replication server to keep track of where it is in the bin-logs. Otherwise, if the replication server is restarted, it will begin reading as far back as it possibly can.
The overloads to Binary_log_driver::connect seem to provision for this, but this feature does not seem to be used in the example code.
Am I overlooking something, or might this be a future enhancement?
Thank you.
Sincerely,
Mike Albert
Hi Mike,
I am sorry about the delay in the reply.
Thank you for trying out the Applier and looking into the code base as well as the documentation, it will be very helpful to improve the Applier!
Yes, it is the responsibility of the replication server to keep track of where it is in the bin-logs. As you mention correctly, if not kept track, the server, if restarted will begin reading from the start.
The Applier currently suffers from this issue. If restarted, the Applier reads again from the first log. This is a feature enhancement, and should be addressed. Thank you once again for pointing this out!
You may please feel free to report it on http://bugs.mysql.com , under the category 'Server: Binlog', marking the severity as 4(feature request). Or, I can do it on your behalf. Once it is reported, we will be able to commit a patch for the same.
Thank you once again.
Regards,
Shubhangi
Greetings,
When I run the "make" command I get the following error:
mysql-hadoop-applier-0.1.0/src/tcp_driver.cpp:41:25: fatal error: openssl/evp.h: No such file or directory
compilation terminated.
make[2]: *** [src/CMakeFiles/replication_static.dir/tcp_driver.cpp.o] Error 1
make[1]: *** [src/CMakeFiles/replication_static.dir/all] Error 2
make: *** [all] Error 2
I have tried installing libssl-dev with no luck (some people fixed similar problems with this lib).
Can you help me with this error.
Thank you.
Carlos
Hi,
IMHO installing libssl-dev should solve the above problem, but if it's not working, then you can just comment out those two lines in tcp_driver.cpp:
#include < openssl/evp.h >
#include < openssl/rand.h >
And try compiling your code; these header files were being used before but now they are not required, and they will be removed from the code in the next release.
Thanks for the answer.
I solved the problem by reinstalling libssl-dev.
But now I'm stuck at 'make happlier'. I get this error:
Linking CXX executable happlier
CMakeFiles/happlier.dir/hdfs_schema.cpp.o: In function `HDFSSchema::HDFSSchema(std::basic_string, std::allocator > const&, int, std::basic_string, std::allocator > const&, std::basic_string, std::allocator > const&)':
hdfs_schema.cpp:(.text+0xa0): undefined reference to `hdfsConnect'
hdfs_schema.cpp:(.text+0xdb): undefined reference to `hdfsConnectAsUser'
CMakeFiles/happlier.dir/hdfs_schema.cpp.o: In function `HDFSSchema::~HDFSSchema()':
hdfs_schema.cpp:(.text+0x2d0): undefined reference to `hdfsDisconnect'
CMakeFiles/happlier.dir/hdfs_schema.cpp.o: In function `HDFSSchema::HDFS_data_insert(std::basic_string, std::allocator > const&, char const*)':
hdfs_schema.cpp:(.text+0x4a3): undefined reference to `hdfsSetWorkingDirectory'
hdfs_schema.cpp:(.text+0x524): undefined reference to `hdfsExists'
hdfs_schema.cpp:(.text+0x55a): undefined reference to `hdfsOpenFile'
hdfs_schema.cpp:(.text+0x58d): undefined reference to `hdfsOpenFile'
hdfs_schema.cpp:(.text+0x5fc): undefined reference to `hdfsWrite'
hdfs_schema.cpp:(.text+0x680): undefined reference to `hdfsFlush'
hdfs_schema.cpp:(.text+0x6d5): undefined reference to `hdfsCloseFile'
collect2: ld returned 1 exit status
make[3]: *** [examples/mysql2hdfs/happlier] Error 1
make[2]: *** [examples/mysql2hdfs/CMakeFiles/happlier.dir/all] Error 2
make[1]: *** [examples/mysql2hdfs/CMakeFiles/happlier.dir/rule] Error 2
make: *** [happlier] Error 2
Any idea what could be the problem?
Thanks
Carlos
Hi Carlos,
Thank you for trying the applier.
From the errors, it seems that the applier is not able to find the shared library 'libhdfs.so'.
Which Hadoop version are you using? The library comes pre-compiled for 32-bit systems with Hadoop, but you need to compile it for 64 bits.
You may please try locating libhdfs.so on your system (inside HADOOP_HOME) and make sure the path to it is in LD_LIBRARY_PATH.
You may also check the contents of the file CMakeCache.txt, to see at what location is the applier trying to search the library.
Hope that helps.
Thank you,
Shubhangi
Hi,
Great work! Could you please answer these:
Does the Applier work on Hadoop 2.2.X?
When will the Applier support updates and deletes?
When will the Applier be ready for production systems?
Hi Murali,
Thanks for trying the applier.
1) Yes it works with Hadoop 2.2.X, but you might need to change the library and include path in FindHDFS.cmake file.
2) We have considered adding update and delete, but there are no concrete plans yet.
3) I am sorry but we have not decided on that yet.
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000003c6909c47d, pid=4932, tid=140660958218016
#
# JRE version: OpenJDK Runtime Environment (7.0_55-b13) (build 1.7.0_55-mockbuild_2014_04_16_12_11-b00)
# Java VM: OpenJDK 64-Bit Server VM (24.51-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libstdc++.so.6+0x9c47d] std::string::compare(std::string const&) const+0xd
To resolve it i had to change the log-bin configuration in my.cnf
from
log-bin=mysql-bin
to
log-bin=mysqlbin-log
Hi Shubhangi,
BUG 71277 has been fixed in v0.2.0.
Is that available now for download?
thanks
--Karan
Hi everyone,
I am trying to understand and use the Hadoop Applier for my project. I ran through all the steps; however I am having some problems. I don't have a strong software background in general, so my apologies in advance if something seems trivial. To make it easier I will list all the steps concisely here with regard to the install and configure tutorial.
Here is my system setup:
Hadoop Applier package : mysql-hadoop-applier-0.1.0
Hadoop version : 1.2.1
Java version : 1.7.0_51
libhdfs: present
cmake: 2.8
libmysqlclient: mysql-connector-c-6.1.3-linux-glibc2.5-x86_64
gcc : 4.8.2
MySQL Server: 5.6.17 (downloaded as source code, then cmake, make and install)
FindHDFS.cmake: Downloaded online
FindJNI.cmake: Already present in Cmake modules
My env variables in bashrc are as follows:
# JAVA HOME directory setup
export JAVA_HOME="/usr/lib/java/jdk1.8.0_05"
set PATH="$PATH:$JAVA_HOME/bin"
export PATH
export HADOOP_HOME="/home/srai/Downloads/hadoop-1.2.1"
export PATH="$PATH:$HADOOP_HOME/bin"
#Home Directiry configuration
export HIVE_HOME="/usr/lib/hive"
export "PATH=$PATH:$HIVE_HOME/bin"
#MYSQL_DIR
export MYSQL_DIR="/usr/local/mysql"
export "PATH=$PATH:$MYSQL_DIR/bin"
export PATH
1) & 2) Hadoop is downloaded. I can run and stop all the hdfs and mapred daemons correctly. My hadoop version is 1.2.1. My $HADOOP_HOME environment variable is set in .bashrc file as "/home/srai/Downloads/hadoop-1.2.1"
3) & 4) I downloaded a FindHDFS.cmake file and modified it according to the patch which was listed. I placed this file under the following directory "/usr/share/cmake-2.8/Modules". I thought if i place this under the module directory the CMAKE_MODULE_PATH will be able to find it. I am not sure if this is correct or how do i update CMAKE_MODULE_PATH in the CMAKELists.txt and where?
5) FindJNI.cmake was already present in the directory /usr/share/cmake-2.8/Modules so i didn't change or modify it. My JAVA_HOME env variable is set in bashrc file as "/usr/lib/java/jdk1.8.0_05". I didn;t modify or touch LD_LIBRARY_PATH.
6) Downloaded hadoop applier and mysql-connector-c. Since the tutorial says " 'mysqlclient' library is required to be installed in the default library paths", i moved the files of mysqlconnector-c to /usr/lib/mysql-connector-c. I also declared a variable $MYSQL_DIR to point to "/usr/local/mysql"
I ran the cmake command on the parent directory of the hadoop applier source , however i get errors. Below is a complete log:
sudo cmake . -DENABLE_DOWNLOADS=1 mysql-hadoop-applier-0.1.0
[sudo] password for srai:
-- Tests from subdirectory 'tests' added
Adding test test-basic
Adding test test-transport
CMake Warning at examples/mysql2lucene/CMakeLists.txt:3 (find_package):
By not providing "FindCLucene.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "CLucene", but
CMake did not find one.
Could not find a package configuration file provided by "CLucene" with any
of the following names:
CLuceneConfig.cmake
clucene-config.cmake
Add the installation prefix of "CLucene" to CMAKE_PREFIX_PATH or set
"CLucene_DIR" to a directory containing one of the above files. If
"CLucene" provides a separate development package or SDK, be sure it has
been installed.
-- Architecture: x64
-- HDFS_LIB_PATHS: /c++/Linux-amd64-64/lib
-- HDFS includes and libraries NOT found.Thrift support will be disabled (, HDFS_INCLUDE_DIR-NOTFOUND, HDFS_LIB-NOTFOUND)
-- Could NOT find JNI (missing: JAVA_AWT_LIBRARY JAVA_JVM_LIBRARY JAVA_INCLUDE_PATH JAVA_INCLUDE_PATH2 JAVA_AWT_INCLUDE_PATH)
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
HDFS_INCLUDE_DIR (ADVANCED)
used as include directory in directory /home/srai/Downloads/mysql-hadoop-applier-0.1.0/examples/mysql2hdfs
JAVA_AWT_LIBRARY (ADVANCED)
linked by target "happlier" in directory /home/srai/Downloads/mysql-hadoop-applier-0.1.0/examples/mysql2hdfs
JAVA_JVM_LIBRARY (ADVANCED)
linked by target "happlier" in directory /home/srai/Downloads/mysql-hadoop-applier-0.1.0/examples/mysql2hdfs
-- Configuring incomplete, errors occurred!
See also "/home/srai/Downloads/CMakeFiles/CMakeOutput.log".
I don't understand how cmake is using my environment variables to find these files. As i mentioned i am a newbie so please if someone can help me compile the hadoop applier i will really appreciate it.
Thanks
Hi Suleman,
Thank you for the detailed message, and thank you for trying out the applier!
Everything looks fine except for one issue.
The error is that cmake is unable to find the libraries correctly.
The HDFS_LIB_PATHS is set as "/c++/Linux-amd64-64/lib", but it should be
"/home/srai/Downloads/hadoop-1.2.1/c++/Linux-amd64-64/lib".
This implies that the variable HADOOP_HOME is not set, on the terminal where you are running cmake.
Before executing the cmake command can you run
echo $HADOOP_HOME
and see that the output is
/home/srai/Downloads/hadoop-1.2.1 ?
Hope that helps. Please notify us in case you are still having an error.
Thank you,
Shubhangi
Hi Shubhangi,
Thank you for your response.
I double checked my hadoop_home path by doing echo $HADOOP_HOME and i see my output is /home/srai/Downloads/hadoop-1.2.1.
I am not sure why $ENV{HADOOP_HOME}/src/c++/libhdfs/ does not prefix my hadoop home to this path. Can it be because CMAKE_MODULE_PATH cannot find FindHDFS.cmake and FindJNI.cmake? I put the FindHDFS.cmake in the modules under /usr/share/cmake-2.8/Modules and FindJNI.cmake was already there. Also I don't define or use LD_LIBRARY_PATH anywhere.
I modified the FindHDFS.cmake as suggested in step 4 of the tutorial. This might seem silly , however i am not sure what this means or where this modification will go :
--- a/cmake_modules/FindHDFS.cmake
+++ b/cmake_modules/FindHDFS.cmake
Also if you can elaborate a bit more on steps 7 and 8, i will really appreciate it.
Thanks,
Suleman.
Hi,
No, cmake is able to find FindHDFS.cmake and FindJNI.cmake, hence you are getting the suffix (src/c++/libhdfs). You have put it in the correct place.
Not defining LD_LIBRARY_PATH is fine, it will not be a problem.
The modification
--- a/cmake_modules/FindHDFS.cmake
+++ b/cmake_modules/FindHDFS.cmake
is just an indication that you have to modify the file FindHDFS.cmake. It need not go anywhere.
Steps 7 and 8:
1. Run these two commands on the terminal:
export PATH=$HADOOP_HOME/bin:$PATH
export CLASSPATH=$(hadoop classpath)
2. Run the command 'make happlier'.
3. If the above command gives an error, modify LD_LIBRARY_PATH.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/your path to the library libhdfs
4. cd /home/srai/Downloads/mysql-hadoop-applier-0.1.0/examples/mysql2hdfs
5. ./happlier mysql://root@127.0.0.1:3306 hdfs://localhost:9000
Hope that helps!
Thanks,
Shubhangi
Hi Shubhangi,
Thanks for your response. I was able to run the 'cmake' command on the parent directory of the Hadoop Applier source. In my case I ran "sudo cmake . -DENABLE_DOWNLOADS=1 mysql-hadoop-applier-0.1.0" and then "make happlier" from the terminal.
I got the following output:
-- Architecture: x64
-- HDFS_LIB_PATHS: /c++/Linux-amd64-64/lib
-- sh: 1: hadoop: not found
-- HDFS_INCLUDE_DIR: /home/srai/Downloads/hadoop-1.2.1/src/c++/libhdfs
-- HDFS_LIBS: /home/srai/Downloads/hadoop-1.2.1/c++/Linux-amd64-64/lib/libhdfs.so
-- JNI_INCLUDE_DIRS=/usr/lib/java/jdk1.8.0_05/include;/usr/lib/java/jdk1.8.0_05/include/linux;/usr/lib/java/jdk1.8.0_05/include
-- JNI_LIBRARIES=/usr/lib/java/jdk1.8.0_05/jre/lib/amd64/libjawt.so;/usr/lib/java/jdk1.8.0_05/jre/lib/amd64/server/libjvm.so
-- Configuring done
-- Generating done
-- Build files have been written to: /home/srai/Downloads
srai@ubuntu:~/Downloads$ make happlier
Built target replication_static
Built target happlier
I then do : export PATH=$HADOOP_HOME/bin:$PATH
export CLASSPATH=$(hadoop classpath)
change my working directory to /home/srai/Downloads/mysql-hadoop-applier-0.1.0/examples/mysql2hdfs
However when i try to do
./happlier --field-delimiter=, mysql://root@127.0.0.1:3306 hdfs://localhost:9000 ( taken from your video)
i get bash: ./happlier: No such file or directory
When running make happlier, it says built target happlier, so I don't know why it's not being generated. Can you provide some insight?
Thanks,
Suleman.
Hi,
Can you check if happlier is at the following location: /home/srai/Downloads/examples/mysql2hdfs?
$ cd /home/srai/Downloads/examples/mysql2hdfs
$ ./happlier --field-delimiter=, mysql://root@127.0.0.1:3306 hdfs://localhost:9000
Hope that Helps,
Shubhangi
When can we expect the next release of the Hadoop Applier with the above bug fixes?
Hi Shubhangi,
I ran the cmake command, however I get errors. Below is a complete log; can you give me some help:
[root@h2i1 install]# cmake . -DENABLE_DOWNLOADS=1 mysql-hadoop-applier-0.1.0
-- Tests from subdirectory 'tests' added
Adding test test-basic
Adding test test-transport
CMake Warning at examples/mysql2lucene/CMakeLists.txt:3 (find_package):
By not providing "FindCLucene.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "CLucene", but
CMake did not find one.
Could not find a package configuration file provided by "CLucene" with any
of the following names:
CLuceneConfig.cmake
clucene-config.cmake
Add the installation prefix of "CLucene" to CMAKE_PREFIX_PATH or set
"CLucene_DIR" to a directory containing one of the above files. If
"CLucene" provides a separate development package or SDK, be sure it has
been installed.
CMake Warning at examples/mysql2hdfs/CMakeLists.txt:3 (find_package):
By not providing "FindHDFS.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "HDFS", but
CMake did not find one.
Could not find a package configuration file provided by "HDFS" with any of
the following names:
HDFSConfig.cmake
hdfs-config.cmake
Add the installation prefix of "HDFS" to CMAKE_PREFIX_PATH or set
"HDFS_DIR" to a directory containing one of the above files. If "HDFS"
provides a separate development package or SDK, be sure it has been
installed.
-- Could NOT find JNI (missing: JAVA_AWT_LIBRARY JAVA_JVM_LIBRARY)
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
JAVA_AWT_LIBRARY (ADVANCED)
linked by target "happlier" in directory /apps/abm/install/mysql-hadoop-applier-0.1.0/examples/mysql2hdfs
JAVA_JVM_LIBRARY (ADVANCED)
linked by target "happlier" in directory /apps/abm/install/mysql-hadoop-applier-0.1.0/examples/mysql2hdfs
-- Configuring incomplete, errors occurred!
*****************
this is my environment:
export HADOOP_HOME=/usr/lib/hadoop
export CMAKE_MODULE_PATH=/apps/abm/install/FindHDFS.cmake:/apps/abm/install/cmake-2.8.8/Modules/FindJNI.cmake:/usr/share/kde4/apps/cmake/modules/FindCLucene.cmake
export JAVA_HOME=/apps/abm/install/jdk1.8.0_20
export PATH=/apps/abm/install/jdk1.8.0_20/bin:/apps/abm/svr/mysql5/bin:$HADOOP_HOME/bin:$PATH
export CLASSPATH=$HADOOP_HOME/lib
export MYSQL_DIR=/apps/abm/svr/mysql5.6
export HADOOP_HOME_WARN_SUPPRESS=1
export CMAKE_PREFIX_PATH=/var/lib/ambari-agent/cache/stacks/HDP/2.0.6.GlusterFS/services/HDFS
Hi Muthu,
I am sorry but we cannot comment on the release dates in advance.
Please let us know if you have any more questions.
Thanks & Regards,
Neha
Hi Rushm,
Looking at the error that you have pasted here, I am guessing that the file FindHDFS.cmake was not found at the locations given in CMAKE_MODULE_PATH. Can you please check that once more, and let me know if the error persists.
Regards,
Neha
Hi,
The project interested us at our organization, but we wanted a production-ready solution with support for updates and deletes.
Can you please update me on this?
KR.
Hi Dinesh,
Thanks for showing interest in our project. I am sorry but we cannot promise dates; we will keep you updated when we have something production ready.
Regards,
Neha
Hello Shubhangi,
When I tried to make and install, I got the following errors:
-- Architecture: x64
-- HDFS_LIB_PATHS: /usr/local/hadoop/lib/native
-- Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar
-- HDFS_INCLUDE_DIR: /usr/local/hadoop/include
-- HDFS_LIBS: /usr/local/hadoop/lib/native/libhdfs.so
-- Found JNI: /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/libjawt.so
-- JNI_INCLUDE_DIRS=/usr/lib/jvm/java-7-openjdk-amd64/include;/usr/lib/jvm/java-7-openjdk-amd64/include;/usr/lib/jvm/java-7-openjdk-amd64/include
-- JNI_LIBRARIES=/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/libjawt.so;/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server/libjvm.so
-- Configuring done
-- Generating done
-- Build files have been written to: /usr/local/mysql-hadoop-applier-0.1.0
root@DWNPCPU004:/usr/local/mysql-hadoop-applier-0.1.0# make
[ 3%] Building CXX object src/CMakeFiles/replication_shared.dir/access_method_factory.cpp.o
In file included from /usr/local/mysql-hadoop-applier-0.1.0/include/binlog_driver.h:25:0,
from /usr/local/mysql-hadoop-applier-0.1.0/include/access_method_factory.h:24,
from /usr/local/mysql-hadoop-applier-0.1.0/src/access_method_factory.cpp:20:
/usr/local/mysql-hadoop-applier-0.1.0/include/protocol.h:24:23: fatal error: my_global.h: No such file or directory
#include <my_global.h>
^
compilation terminated.
make[2]: *** [src/CMakeFiles/replication_shared.dir/access_method_factory.cpp.o] Error 1
make[1]: *** [src/CMakeFiles/replication_shared.dir/all] Error 2
make: *** [all] Error 2
Can you please help me ?
Thanks
RajeshP
Hi Rajesh,
It seems that MYSQL_DIR is not pointing to the correct place. Are you using mysql source code or mysql binaries? Once these questions are answered we can find the problem faster.
Regards,
Neha
Thank you Neha for your quick reply .
I have downloaded the tar file from dev.mysql.com:
mysql-5.6.23-linux-glibc2.5-x86_64.tar.gz
and pointed MYSQL_DIR=/usr/local/mysql_5.6/mysql_5.6
Thanks,
Hi Rajesh,
So you downloaded the tar file, compiled the source code, and installed it, correct?
Can you paste the output when you run cmake?
Regards,
Neha
Hi Neha,
Here is the output when I run cmake:
-- Architecture: x64
-- HDFS_LIB_PATHS: /lib/native
-- Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar
-- HDFS_INCLUDE_DIR: /usr/local/hadoop/include
-- HDFS_LIBS: /usr/local/hadoop/lib/native/libhdfs.so
-- JNI_INCLUDE_DIRS=/usr/lib/jvm/java-7-openjdk-amd64/include;/usr/lib/jvm/java-7-openjdk-amd64/include;/usr/lib/jvm/java-7-openjdk-amd64/include
-- JNI_LIBRARIES=/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/libjawt.so;/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server/libjvm.so
-- Configuring done
-- Generating done
-- Build files have been written to: /usr/local/mysql-hadoop-applier-0.1.0
It looks like cmake completes successfully,
but when I run make -j8 I get the errors above.
Thanks,
RajeshP.
Hi Neha,
Finally, I have compiled and built happlier.
Thanks,
RajeshP
Hi Neha,
When starting Hive I am getting the following errors now:
Booting Derby (version The Apache Software Foundation - Apache Derby - 10.11.1.1 - (1616546)) instance a816c00e-014c-5adc-6872-000002ebb608
on database directory /usr/lib/hive/apache-hive-1.1.0-bin/bin/metastore_db in READ ONLY mode with class loader sun.misc.Launcher$AppClassLoader@6dc57a92.
Loaded from file:/usr/lib/hive/apache-hive-1.1.0-bin/lib/derby-10.11.1.1.jar.
java.vendor=Oracle Corporation
java.runtime.version=1.7.0_75-b13
user.dir=/usr/lib/hive/apache-hive-1.1.0-bin/bin
os.name=Linux
os.arch=amd64
os.version=3.13.0-24-generic
derby.system.home=null
Database Class Loader started - derby.database.classpath=''
Please help me
Thanks,
RajeshP
Hi Subhangi,
Can you help me get rid of the Hadoop classpath issues?
./happlier --field-delimiter=, mysql://root@127.0.0.1:13000 hdfs://localhost:54310
The default data warehouse directory in HDFS will be set to /usr/hive/warehouse
Change the default data warehouse directory? (Y or N) N
loadFileSystems error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsBuilderConnect(forceNewInstance=0, nn=localhost, port=54310, kerbTicketCachePath=(NULL), userName=(NULL)) error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
Couldnot connect to HDFS file system
I am using Hadoop 2.7. May I know how you resolved this by setting all the jars on the CLASSPATH?
Hi Neha,
While running make I got this error:
make[2]: *** [storage/innobase/CMakeFiles/innobase.dir/row/row0log.cc.o] Error 1
make[1]: *** [storage/innobase/CMakeFiles/innobase.dir/all] Error 2
make: *** [all] Error
Thanks in advance.
Hi Shubhangi,
It is good to hear about mysql-hadoop-applier.
I tried configuring it but got stuck at the make step.
While running the make command I am getting this error:
/usr/bin/ld: ../lib/libgtest.a(gtest-all.cc.o): undefined reference to symbol 'pthread_key_delete@@GLIBC_2.2.5'
/lib/x86_64-linux-gnu/libpthread.so.0: error adding symbols: DSO missing from command line
Please help me out.
Thanks in advance.
Your pthread library is not getting linked properly; please see this link for further details:
http://stackoverflow.com/questions/25617839/undefined-reference-to-symbol-pthread-key-deleteglibc-2-2-5
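One workaround that is commonly suggested for this specific linker error (a sketch, not a verified fix for this particular build) is to pass -pthread explicitly when configuring, so that libpthread ends up on the link line:
# Re-run the build with -pthread added to the compile and link flags (assumed workaround)
cmake . -DCMAKE_CXX_FLAGS="-pthread" -DCMAKE_EXE_LINKER_FLAGS="-pthread"
make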
Hi Shubhangi,
I am stuck here; could you please look at this and tell me the solution?
hduser@hadoop:~/mysql-hadoop-applier-0.1.0/examples/mysql2hdfs$ ./happlier --field-delimiter=, mysql://root@192.168.1.115:3306 hdfs://192.168.1.115:54310
The default data warehouse directory in HDFS will be set to /usr/hive/warehouse
Change the default data warehouse directory? (Y or N) N
loadFileSystems error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsBuilderConnect(forceNewInstance=0, nn=192.168.1.115, port=54310, kerbTicketCachePath=(NULL), userName=(NULL)) error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
Couldnot connect to HDFS file system
Thanks
Narendra K
Please check these two links:
http://stackoverflow.com/questions/21064140/hadoop-c-hdfs-test-running-exception
http://stackoverflow.com/questions/9320619/can-jni-be-made-to-honour-wildcard-expansion-in-the-classpath/9322747#9322747
Let me know if that fixes your issue.
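For reference, the usual shape of the workaround described in those links (a sketch, assuming a Hadoop 2.x installation with the hadoop command on the PATH): libhdfs loads the JVM through JNI, and JNI does not expand wildcards in CLASSPATH, so every Hadoop jar has to be listed explicitly before starting happlier.
# Expand every Hadoop jar into CLASSPATH; JNI does not honour the * wildcards
# that "hadoop classpath" prints by default (--glob is available in recent Hadoop 2.x).
export CLASSPATH=$(hadoop classpath --glob)
./happlier --field-delimiter=, mysql://root@192.168.1.115:3306 hdfs://192.168.1.115:54310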
Hi team,
May I know whether a production-ready binary of the applier is available?
Hi team,
I got an error while executing the "make happlier" command:
In file included from /home/hamdi/Downloads/mysql2hadoop/mysql-hadoop-applier-0.1.0/include/my_sys.h:26:0,
from /home/hamdi/Downloads/mysql2hadoop/mysql-hadoop-applier-0.1.0/include/hash.h:22,
from /home/hamdi/Downloads/mysql2hadoop/mysql-hadoop-applier-0.1.0/include/sql_common.h:26,
from /home/hamdi/Downloads/mysql2hadoop/mysql-hadoop-applier-0.1.0/include/protocol.h:27,
from /home/hamdi/Downloads/mysql2hadoop/mysql-hadoop-applier-0.1.0/include/binlog_driver.h:25,
from /home/hamdi/Downloads/mysql2hadoop/mysql-hadoop-applier-0.1.0/include/access_method_factory.h:24,
from /home/hamdi/Downloads/mysql2hadoop/mysql-hadoop-applier-0.1.0/src/access_method_factory.cpp:20:
/home/hamdi/Downloads/mysql2hadoop/mysql-hadoop-applier-0.1.0/include/mysql/psi/mysql_thread.h:88:3: error: ‘my_mutex_t’ does not name a type
my_mutex_t m_mutex;
^
/home/hamdi/Downloads/mysql2hadoop/mysql-hadoop-applier-0.1.0/include/mysql/psi/mysql_thread.h:116:3: error: ‘native_rw_lock_t’ does not name a type
native_rw_lock_t m_rwlock;
^
/home/hamdi/Downloads/mysql2hadoop/mysql-hadoop-applier-0.1.0/include/mysql/psi/mysql_thread.h:132:3: error: ‘rw_pr_lock_t’ does not name a type
rw_pr_lock_t m_prlock;
^
/home/hamdi/Downloads/mysql2hadoop/mysql-hadoop-applier-0.1.0/include/mysql/psi/mysql_thread.h:173:3: error: ‘native_cond_t’ does not name a type
native_cond_t m_cond;
error: ‘my_thread_id’ was not declared in this scope
make[3]: *** [src/CMakeFiles/replication_static.dir/access_method_factory.cpp.o] Error 1
make[2]: *** [src/CMakeFiles/replication_static.dir/all] Error 2
make[1]: *** [examples/mysql2hdfs/CMakeFiles/happlier.dir/rule] Error 2
make: *** [happlier] Error 2
Please help me out.
Thanks in advance.
Is there any way to know the column names (as strings) in the insert transaction? For example:
INSERT INTO runoob_tbl1
(runoob_title, runoob_author, submission_date)
VALUES
("Learn PHP", "John Poul", NOW());
We can get "Learn PHP" and "John Poul" from mysql::Row_of_fields; how can we get the corresponding column names, runoob_title and runoob_author? Thanks.
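As far as I know, the MySQL 5.6 row-based binary log does not carry column names (the Table_map_event describes column types only), so the applier cannot recover them from the event stream by itself. A possible workaround, sketched below with hypothetical connection details and assuming the table lives in a database named test, is to look the names up once per table on the master:
# Hypothetical one-off lookup of the column names, in the order they appear in the row events
mysql -h 127.0.0.1 -u root -e "SELECT ordinal_position, column_name FROM information_schema.columns WHERE table_schema = 'test' AND table_name = 'runoob_tbl1' ORDER BY ordinal_position;"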
Hi Neha Kumari, I have compiled happlier successfully, but when I execute it an error occurs. Below is the error message:
The default data warehouse directory in HDFS will be set to /usr/hive/warehouse
Change the default data warehouse directory? (Y or N) N
16/04/22 15:03:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to HDFS file system
The data warehouse directory is set as /user/hive/warehouse
Can't connect to the master.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000000000423c8c, pid=20826, tid=140018127853376
#
# JRE version: OpenJDK Runtime Environment (8.0_77-b03) (build 1.8.0_77-b03)
# Java VM: OpenJDK 64-Bit Server VM (25.77-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [happlier+0x23c8c] Table_index::~Table_index()+0x4e
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/liveuser/mysql-hadoop-applier-0.1.0/examples/mysql2hdfs/hs_err_pid20826.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Can you help me find out where the problem is?
Thanks,
Narendra K
Hello,
I can't find the mysql-hadoop-applier; please help.