Questions tagged [perceptron]

The perceptron is a basic linear classifier that outputs binary labels. If the training data set is not linearly separable, the learning algorithm cannot converge.

The classical XOR problem is a dataset that is not linearly separable, so a single perceptron cannot solve it. By adding hidden layers with nonlinear activations between the input and output, the data can be separated; with enough hidden units, the resulting network can approximate any well-defined function to arbitrary precision. This generalization is known as a multilayer perceptron (MLP).

For more details, see the wiki.
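The convergence behaviour described above can be sketched in a few lines of NumPy: the classic learning rule converges on a linearly separable target (AND) but never on XOR. This is a minimal illustration, not a reference implementation; the helper name `train_perceptron` is my own.

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    # Augment inputs with a constant 1 so the bias is learned as w[0].
    Xa = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xa.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xa, y):
            if yi * np.dot(w, xi) <= 0:   # misclassified (labels are +/-1)
                w += yi * xi              # classic perceptron update
                errors += 1
        if errors == 0:                   # a full clean pass: converged
            return w, True
    return w, False

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([-1, -1, -1, 1])   # AND is linearly separable
y_xor = np.array([-1, 1, 1, -1])    # XOR is not

_, converged_and = train_perceptron(X, y_and)
_, converged_xor = train_perceptron(X, y_xor)
print(converged_and, converged_xor)  # True False
```

The perceptron convergence theorem guarantees the AND case terminates after finitely many updates; for XOR the loop simply exhausts its epoch budget.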

489 questions
114 votes, 4 answers

multi-layer perceptron (MLP) architecture: criteria for choosing number of hidden layers and size of the hidden layer?

If we have 10 eigenvectors, then we can have 10 neurons in the input layer. If we have 5 output classes, then we can have 5 nodes in the output layer. But what are the criteria for choosing the number of hidden layers in an MLP, and how many neurons in 1…
72 votes, 4 answers

Perceptron learning algorithm not converging to 0

Here is my perceptron implementation in ANSI C: #include <stdio.h> #include <stdlib.h> #include <time.h> float randomFloat() { srand(time(NULL)); float r = (float)rand() / (float)RAND_MAX; return r; } int calculateOutput(float…
Richard Knop
44 votes, 5 answers

Why is weight vector orthogonal to decision plane in neural networks

I am a beginner in neural networks and am learning about perceptrons. My question is: why is the weight vector perpendicular to the decision boundary (hyperplane)? I have referred to many books, but all of them mention that the weight vector is perpendicular to the decision…
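A one-line algebraic argument answers this: any two points x1, x2 on the boundary satisfy w·x + b = 0, so subtracting the two equations gives w·(x1 - x2) = 0, i.e. w is orthogonal to every direction lying inside the boundary. A minimal NumPy check (the specific numbers are arbitrary examples):

```python
import numpy as np

w = np.array([2.0, -1.0])   # example weights (arbitrary)
b = 3.0

# Two distinct points on the boundary w.x + b = 0 (here: 2x - y + 3 = 0).
x1 = np.array([0.0, 3.0])
x2 = np.array([1.0, 5.0])
assert np.isclose(w @ x1 + b, 0) and np.isclose(w @ x2 + b, 0)

# Their difference is a direction inside the hyperplane; w is orthogonal to it.
print(w @ (x1 - x2))  # 0.0
```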
29 votes, 4 answers

Training on imbalanced data using TensorFlow

The Situation: I am wondering how to use TensorFlow optimally when my training data is imbalanced in label distribution between 2 labels. For instance, suppose the MNIST tutorial is simplified to only distinguish between 1's and 0's, where all…
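A common remedy for this situation is to weight each example's loss by the inverse frequency of its class, so the rare class contributes as much total gradient as the common one. A sketch in plain NumPy (the variable names are mine; the same per-example weights can be fed to TensorFlow's weighted loss ops):

```python
import numpy as np

y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])     # 80/20 imbalance
counts = np.bincount(y)                           # [8, 2]
class_weights = len(y) / (len(counts) * counts)   # inverse-frequency weights
per_example_w = class_weights[y]                  # one weight per sample

p = np.full(len(y), 0.5)                          # dummy predicted P(y=1)
bce = -(y * np.log(p) + (1 - y) * np.log(1 - p))  # binary cross-entropy
weighted_loss = np.mean(per_example_w * bce)      # each class now contributes
                                                  # equal total weight (5.0 each)
print(class_weights[1] / class_weights[0])  # 4.0
```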
16 votes, 2 answers

Intuition for perceptron weight update rule

I am having trouble understanding the weight update rule for perceptrons: w(t + 1) = w(t) + y(t)x(t). Assume we have a linearly separable data set. w is the weight vector [w0, w1, w2, ...], where w0 is the bias. x is the input vector [x0, x1,…
joshreesjones
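The intuition behind the rule w(t+1) = w(t) + y(t)x(t) can be verified numerically: after updating on a misclassified point (x, y), the score y·(w·x) increases by exactly ||x||², so the point becomes "less wrong" or correctly classified. A small NumPy check (the particular vectors are arbitrary examples):

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 2.0, 1.0])    # x[0] = 1 acts as the bias input
y = -1                           # true label; labels are +/-1

before = y * (w @ x)             # negative, so the point is misclassified
w_new = w + y * x                # the perceptron update
after = y * (w_new @ x)

print(after - before)  # 6.0, which equals ||x||^2
```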
16 votes, 3 answers

How do you draw a line using the weight vector in a Linear Perceptron?

I understand the following: In 2D space, each data point has 2 features: x and y. The weight vector in 2D space contains 3 values [bias, w0, w1] which can be rewritten as [w0,w1,w2]. Each datapoint needs an artificial coordinate [1, x, y] for the…
user1337603
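With augmented weights [w0, w1, w2] (w0 being the bias), the boundary is the set of points where w0 + w1·x + w2·y = 0; solving for y gives the line to draw: y = -(w0 + w1·x) / w2, assuming w2 ≠ 0. A short sketch (the weights are an arbitrary example):

```python
import numpy as np

w = np.array([-3.0, 2.0, -1.0])   # [bias, w1, w2], arbitrary example

def boundary_y(x):
    # Rearranged boundary equation: y = -(w0 + w1*x) / w2 (requires w2 != 0).
    return -(w[0] + w[1] * x) / w[2]

xs = np.array([0.0, 1.0, 2.0])
print(boundary_y(xs))  # points on the line y = 2x - 3
```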
12 votes, 2 answers

Multi dimensional inputs in pytorch Linear method?

When building a simple perceptron neural network, we usually pass a 2D input matrix of shape (batch_size, features) to a 2D weight matrix, similar to this simple neural network in numpy. I always assumed a Perceptron/Dense/Linear layer of a…
Eka
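What the question is probing: `torch.nn.Linear` applies the affine map y = x·Wᵀ + b to the last dimension only, so the input may carry any number of leading dimensions, e.g. (batch, seq_len, features). The broadcasting behaviour can be mimicked in plain NumPy (a sketch of the semantics, not PyTorch itself):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 7, 3))    # (batch=4, seq=7, in_features=3)
W = rng.normal(size=(5, 3))       # (out_features=5, in_features=3)
b = rng.normal(size=(5,))

y = x @ W.T + b                   # matmul over the last axis only
print(y.shape)  # (4, 7, 5)
```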
11 votes, 3 answers

Parameter Tuning for Perceptron Learning Algorithm

I'm having sort of an issue trying to figure out how to tune the parameters for my perceptron algorithm so that it performs relatively well on unseen data. I've implemented a verified working perceptron algorithm and I'd like to figure out a method…
11 votes, 7 answers

Geometric representation of Perceptrons (Artificial neural networks)

I am taking this course on Neural networks in Coursera by Geoffrey Hinton (not current). I have a very basic question about weight spaces. https://d396qusza40orc.cloudfront.net/neuralnets/lecture_slides%2Flec2.pdf Page 18. If I have a weight vector (bias…
kosmos
9 votes, 3 answers

What's the point of the threshold in a perceptron?

I'm having trouble seeing what the threshold actually does in a single-layer perceptron. The data is usually separated no matter what the value of the threshold is. It seems a lower threshold divides the data more equally; is this what it is used…
Hypercube
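The threshold's role is to shift the decision boundary away from the origin: the rule "fire if w·x > θ" is identical to "fire if w·x + b > 0" with bias b = -θ; without it, the separating hyperplane must pass through the origin and cannot model gates like AND. A tiny sketch (the weights and threshold are example values):

```python
import numpy as np

w = np.array([1.0, 1.0])
theta = 1.5                        # threshold; equivalently, bias b = -1.5

def fires(x):
    return bool(w @ x > theta)

# AND gate: only the input (1, 1) exceeds the threshold.
inputs = [np.array(p) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print([fires(x) for x in inputs])  # [False, False, False, True]
```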
9 votes, 3 answers

Can a perceptron be used to detect hand-written digits?

Let's say I have a small bitmap which contains a single digit (0..9) in hand writing. Is it possible to detect the digit using a (two-layered) perceptron? Are there other possibilities to detect single digits from bitmaps besides using neural nets?
9 votes, 5 answers

implementing a perceptron classifier

Hi, I'm pretty new to Python and to NLP. I need to implement a perceptron classifier. I searched through some websites but didn't find enough information. For now I have a number of documents which I grouped according to category (sports,…
9 votes, 2 answers

Why won't Perceptron Learning Algorithm converge?

I have implemented the Perceptron Learning Algorithm in Python as below. Even with 500,000 iterations, it still won't converge. I have a training data matrix X with target vector Y, and a weight vector w to be optimized. My update rule is:…
manbearpig
8 votes, 1 answer

Multilayer-perceptron, visualizing decision boundaries (2D) in Python

I have programmed a multilayer perceptron for binary classification. As I understand it, one hidden layer can be represented using just lines as decision boundaries (one line per hidden neuron). This works well and can easily be plotted just using…
johnblund
7 votes, 2 answers

Why does single-layer perceptron converge so slow without normalization, even when the margin is large?

This question was completely rewritten after I confirmed my results (the Python notebook can be found here) with a piece of code written by someone else (which can be found here). Here is that code, instrumented by me to work with my data and to count epochs…
AlwaysLearning