Although your data preprocessing steps or the way you construct your model might not be correct (e.g., the numbers on the y-axis of the 1st chart, the results on the 2nd chart), I will focus this answer on why "every time the code runs the graphs are different" and on "the numbers on the y-axis".
Numbers on the y-axis: in the 2nd chart, accuracy is expressed in the range 0 to 1, while the 3rd chart takes values from 0 to 100 to express the error as a percentage.
Why the graphs differ every time the model runs:
Getting different results is not a bug but rather a feature of machine learning [1], because it involves stochastic (non-deterministic) processes [2]: the algorithms make use of randomness or probabilistic decisions.
This variance in results might be due to differences in the training data, to stochastic learning algorithms or evaluation procedures, or to differences in platform (running the model on a different machine). In other words, randomness can enter through data collection, observation order, the algorithm itself, sampling, or resampling [3]. For example, while training a deep neural network, the algorithm may use randomness in the form of random initial weights (coefficients) or a random shuffle of the samples in each epoch.
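To see this variance in action, here is a minimal sketch using scikit-learn's MLPClassifier on synthetic data (an assumption for illustration; your question uses Keras in R, but the effect is the same): two identical models trained without a fixed seed typically report different accuracies.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Two identical models trained without a fixed seed: the initial
# weights are drawn randomly, so the scores usually differ per run.
for run in range(2):
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300)
    model.fit(X_train, y_train)
    print(f"run {run}: accuracy = {model.score(X_test, y_test):.4f}")
```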
Can you fix that?
Yes. You can control the randomness by assigning a fixed value to the "random state" or "random seed" [4, 5, 6, 7, 8]. The random_state variable controls the shuffling applied to the data before the split [9].
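As a hedged sketch of that fix in Python/scikit-learn (assuming the same toy setup as above), fixing random_state in both the split and the model makes repeated runs reproduce the same score:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Fixing random_state in both the split and the model makes the
# pipeline deterministic: every run now prints the same score.
for run in range(2):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300,
                          random_state=42)
    model.fit(X_train, y_train)
    print(f"run {run}: accuracy = {model.score(X_test, y_test):.4f}")
```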
I'm not familiar with R, hence the documentation above refers to scikit-learn with Python, but I suppose you can do something similar with "seed.number" based on the R documentation [10], e.g. by adding seed.number="some value" next to validation_split during model fitting.
I advise you to have a look at the articles in the references, which I hope you find helpful.