I am answering this based on my experience with dashboard v2.1.0 on K8s v1.20.
When kubernetes-dashboard is installed, it creates a service account and two roles, all named "kubernetes-dashboard": a Role bound within the dashboard namespace and a ClusterRole bound cluster-wide (but it is not cluster-admin). So, unfortunately, those permissions are not sufficient to manage the entire cluster, as can be seen here:

Log from installation:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Looking at the permissions, you see:
$ kubectl describe clusterrole kubernetes-dashboard
Name: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
PolicyRule:
Resources              Non-Resource URLs  Resource Names  Verbs
---------              -----------------  --------------  -----
nodes.metrics.k8s.io   []                 []              [get list watch]
pods.metrics.k8s.io    []                 []              [get list watch]
$ kubectl describe role kubernetes-dashboard -n kubernetes-dashboard
Name: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
PolicyRule:
Resources        Non-Resource URLs  Resource Names                     Verbs
---------        -----------------  --------------                     -----
secrets          []                 [kubernetes-dashboard-certs]       [get update delete]
secrets          []                 [kubernetes-dashboard-csrf]        [get update delete]
secrets          []                 [kubernetes-dashboard-key-holder]  [get update delete]
configmaps       []                 [kubernetes-dashboard-settings]    [get update]
services/proxy   []                 [dashboard-metrics-scraper]        [get]
services/proxy   []                 [heapster]                         [get]
services/proxy   []                 [http:dashboard-metrics-scraper]   [get]
services/proxy   []                 [http:heapster:]                   [get]
services/proxy   []                 [https:heapster:]                  [get]
services         []                 [dashboard-metrics-scraper]        [proxy]
services         []                 [heapster]                         [proxy]
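As a quick sanity check, you can also ask the API server what the pre-created service account is allowed to do. For example (just an illustration; given the rules above this should answer "no"):
$ kubectl auth can-i list deployments --all-namespaces --as=system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard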
Rather than making the kubernetes-dashboard service account a cluster-admin (that account is used by the dashboard itself for data collection), a better approach is to create a new service account whose only job is to hold a token. That way access can be revoked by simply deleting the account, instead of changing the permissions of the pre-created one.
To create a new service account called "dashboard-admin" and apply it declaratively:
$ nano dashboard-svcacct.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
$ kubectl apply -f dashboard-svcacct.yaml
serviceaccount/dashboard-admin created
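If you prefer the imperative route, the same account can be created with a one-liner instead of the manifest (same result, just no file to keep):
$ kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard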
To bind that new service account to the cluster-admin role:
$ nano dashboard-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
$ kubectl apply -f dashboard-binding.yaml
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
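The equivalent imperative command, if you would rather skip the YAML, is:
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin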
To extract the token for this service account, which can then be used to log in:
$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name: dashboard-admin-token-4fxtt
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 9cd5bb80-7901-413b-9eac-7b72c353d4b9
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ikp3ZERpQTFPOV<REDACTED>
The entire token, which starts with "eyJ", can now be used to log in:

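If you only want the token itself rather than the whole describe output, a one-liner along these lines should also work (it decodes the token field straight out of the secret):
$ kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode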
But cutting and pasting the token to log in can become a pain in the rear, especially given the default session timeout. I prefer a config file. For this option you will need the cluster's certificate-authority data. The cluster section of this config file is the same as in the existing file under ~/.kube/config. The config file does not need to be loaded onto the Kubernetes master; it only needs to be on the workstation whose browser is used to access the dashboard. I named it dashboard-config and used VS Code to create it (any editor will do; just make sure the lines are unwrapped so there are no spaces in the base64 values). There is no need to carry over the admin client certificate and private key data under users: if you are copying the existing config file.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CLUSTER CA DATA HERE>
    server: https://<IP ADDR OF CLUSTER>:6443
  name: kubernetes   # name of the cluster
contexts:
- context:
    cluster: kubernetes
    user: dashboard-admin
  name: dashboard-admin@kubernetes
current-context: dashboard-admin@kubernetes
kind: Config
preferences: {}
users:
- name: dashboard-admin
  user:
    token: <TOKEN from the command above, starts with eyJ>
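To fill in the placeholders: the token comes from the commands above, and the CA data is the same certificate-authority-data value that already sits in ~/.kube/config. Assuming kubectl is still pointed at that admin config, something like this should print it:
$ kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'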
And it works now.