
I set up Spark and Hadoop.

I downloaded them, set the environment variables, and launched pyspark in the shell.

Here is the output from pyspark:

dino@ubuntu:~$ pyspark
Python 2.7.12 (default, Dec  4 2017, 14:50:18)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/01/24 18:25:14 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/24 18:25:14 WARN Utils: Your hostname, ubuntu resolves to a loopback address: 127.0.1.1; using 192.168.20.147 instead (on interface ens33)
18/01/24 18:25:14 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
18/01/24 18:25:24 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.2.1
      /_/

Using Python version 2.7.12 (default, Dec  4 2017 14:50:18)
SparkSession available as 'spark'.
>>> 

(I am using Ubuntu 16.04, Spark 2.2.1, and Hadoop 2.7.5.)

There are some WARN messages, and I want to know what they mean and how to fix them.
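For context, the startup banner itself points at the relevant knobs. A sketch of the usual adjustments, assuming a standard Spark 2.2.x layout under $SPARK_HOME (paths and the IP address are taken from the output above; adjust to your install):

```shell
# Raise the default log level by creating a log4j config from the shipped template
cd "$SPARK_HOME/conf"
cp log4j.properties.template log4j.properties
# then edit log4j.properties and change the line
#   log4j.rootCategory=INFO, console
# to
#   log4j.rootCategory=ERROR, console

# Silence the loopback-address warning by pinning the bind address
# (the address Spark picked on interface ens33 in the output above)
export SPARK_LOCAL_IP=192.168.20.147
```

Alternatively, the log level can be changed per session from inside the shell with `sc.setLogLevel("ERROR")`, as the banner suggests.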

    There's nothing that needs fixing here, and the message literally tells you how to change the logging level. Also, see `log4j-defaults.properties`. The level is set to WARN, as it says. – OneCricketeer Jan 25 '18 at 02:52
  • Oh, I see! But I have Hadoop in my VM, so why does the 'Unable to load native-hadoop' warning occur? – dino Jan 25 '18 at 03:42
  • https://stackoverflow.com/a/24927214/2308683 – OneCricketeer Jan 25 '18 at 05:52
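As the linked answer discusses, the NativeCodeLoader warning is harmless (Spark falls back to built-in Java classes), and it typically goes away once Spark can find Hadoop's native libraries. A sketch, where /usr/local/hadoop is an assumed install path for the unpacked hadoop-2.7.5:

```shell
# Point Spark at Hadoop's native libraries (the path is an assumption;
# use wherever hadoop-2.7.5 is actually unpacked on your machine)
export HADOOP_HOME=/usr/local/hadoop
export LD_LIBRARY_PATH="$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH"
```

Note that the bundled native libraries are built for a specific platform, so on some systems the warning persists even with this set; it can safely be ignored.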

0 Answers