
Current setup

  • MySQL connector version: mysql-connector-java-5.1.13
  • Sqoop version: sqoop-1.4.6
  • Hadoop version: hadoop-2.7.3
  • Java version: jdk-8u171-linux-x64 / jdk1.8.0_171 (Oracle JDK)
  • OS: Ubuntu

Note: I also tried with OpenJDK; the same issue exists with that version as well.

Sqoop command:

bin/sqoop import --connect jdbc:mysql://localhost:3306/testDb --username root --password root --table student --target-dir /user/hadoop/student -m 1 --driver com.mysql.jdbc.Driver

[screenshot of the error output]

  • How large is the table you're importing? Says you're using over 4GB memory... How much is on your machine? – OneCricketeer May 03 '18 at 06:14
  • The table contains only three records and my machine RAM is 4GB – ruchika doifode May 04 '18 at 05:12
  • Okay, so you're on a single node... Datanode, NodeManager, and ResourceManagers each take 1 GB of memory by default, not leaving much room for the OS and anything else... You need to tune down the yarn-site.xml memory settings at least to about 256MB per container and application master – OneCricketeer May 04 '18 at 06:56
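The tuning suggested in the comment above could look like the following yarn-site.xml fragment. The values are an illustrative sketch for a 4 GB single-node machine, not prescriptive settings:

```xml
<!-- yarn-site.xml: cap YARN's total memory and shrink the per-container minimum -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value> <!-- total RAM YARN may hand out on this node (assumed value) -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value> <!-- smallest container YARN will allocate -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>1024</value> <!-- largest single container (assumed value) -->
</property>
```

The application master and map task sizes (yarn.app.mapreduce.am.resource.mb, mapreduce.map.memory.mb) are set in mapred-site.xml and would need to be lowered to match.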

1 Answer


Try increasing mapper parallelism (the -m 1 parameter in your command). Set it to a higher value so that each mapper processes less data and requires less memory.
Also, --split-by is required when the number of mappers is greater than 1.

See the suggestions about choosing a --split-by column here.

An evenly distributed integer column is preferable.
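Putting both suggestions together, the import could be run as sketched below. The --split-by column name id is an assumption; substitute whatever evenly distributed integer column the student table actually has:

```shell
bin/sqoop import \
  --connect jdbc:mysql://localhost:3306/testDb \
  --username root --password root \
  --table student \
  --target-dir /user/hadoop/student \
  --driver com.mysql.jdbc.Driver \
  -m 4 \
  --split-by id   # hypothetical column; pick an evenly distributed integer key
```

With -m 4, Sqoop splits the id range into four intervals and runs one mapper per interval, so each mapper reads only a slice of the table.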

leftjoin