POST

I'm going to repeat an exercise about sklearn.cluster.DBSCAN. Link: https://scikit-learn.org/stable/auto_examples/cluster/plot_dbscan.html#sphx-glr-auto-examples-cluster-plot-dbscan-py

My question is very simple: I'd like to import a .csv file that contains the points for the exercise. My file has 380,000 points; the x and y coordinates are separated by a comma, and there are no headings (i.e. no "x" or "y"). The first coordinate is x and the second is y.

print(__doc__)
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
# #############################################################################
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=750, centers=centers, cluster_std=0.4,
                            random_state=0)
X = StandardScaler().fit_transform(X)
# #############################################################################
# Compute DBSCAN
db = DBSCAN(eps=0.3, min_samples=10).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
n_noise_ = list(labels).count(-1)
print('Estimated number of clusters: %d' % n_clusters_)
print('Estimated number of noise points: %d' % n_noise_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f"
      % metrics.adjusted_rand_score(labels_true, labels))
print("Adjusted Mutual Information: %0.3f"
      % metrics.adjusted_mutual_info_score(labels_true, labels))
print("Silhouette Coefficient: %0.3f"
      % metrics.silhouette_score(X, labels))
# #############################################################################
# Plot result
import matplotlib.pyplot as plt
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each)
          for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
    if k == -1:
        # Black used for noise.
        col = [0, 0, 0, 1]

    class_member_mask = (labels == k)

    xy = X[class_member_mask & core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=14)

    xy = X[class_member_mask & ~core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=6)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
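To run this example on the CSV file instead of make_blobs, the points can be loaded with numpy.loadtxt, which handles headerless comma-separated columns. A minimal sketch — the filename "points.csv" is a placeholder, and the tiny sample file written here only stands in for the real 380,000-point data:

```python
import numpy as np

# Write a tiny sample file in the same headerless "x,y" format, purely for
# demonstration; in practice, point loadtxt at the real 380,000-point file.
with open("points.csv", "w") as f:
    f.write("1.0,2.0\n3.5,4.5\n5.0,6.0\n")

# Column 0 is x, column 1 is y, matching the file layout described above.
X = np.loadtxt("points.csv", delimiter=",")
print(X.shape)  # (3, 2)
```

The loaded array can then be passed through StandardScaler().fit_transform(X) and on to DBSCAN exactly as in the example. Note that with real data there is no labels_true, so the supervised metrics (homogeneity, completeness, V-measure, ARI, AMI) would have to be dropped.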
a month ago


POST

Good day, everybody. I have a task: I need to create many circles in a point cloud (my file can be .csv) and calculate their diameters within a 100 m x 100 m area. I have the coordinates of the starting point. The circles should be created at a height of 1.35 m above the initial point; for example, if I have z = 600 m, I will create all circles at 601.35 m. The main issue is that there are a lot of points, at least 2 million, and some of them are scattered. I want my script to move a circle around at the 601.35 m height and adjust its diameter to fit the shape of the points. The main goal is to find enough points (around 20) that fit in each circle and report the diameter of each one. The diameters will range from 0.1 m to 0.9 m. I attach a figure to demonstrate my idea. I will be happy to get any help; perhaps somebody has already developed a similar script. For the final output, a list of the diameters in .csv will be fine. As for modules, I am thinking about scikit-learn. Thank you very much in advance.
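One possible approach, sketched below under assumptions not stated in the post: slice the cloud in a thin horizontal band around the 601.35 m height, cluster the sliced (x, y) points with DBSCAN from scikit-learn, and estimate each cluster's diameter as twice the mean distance from the cluster's centroid. The function name, the band half-width, and the eps/min_samples values are all hypothetical and would need tuning to the real data:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical helper: slice an (N, 3) array of x, y, z points in a thin band
# around the target height, cluster the (x, y) slice, and report each
# cluster's estimated diameter. All tolerances here are assumptions.
def circle_diameters(points, z0=601.35, band=0.05, eps=0.1, min_samples=20):
    # Keep only points within +/- band metres of the slicing height.
    sl = points[np.abs(points[:, 2] - z0) < band][:, :2]
    if len(sl) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(sl)
    diameters = []
    for k in set(labels) - {-1}:          # -1 is DBSCAN's noise label
        cluster = sl[labels == k]
        center = cluster.mean(axis=0)     # crude circle centre: the centroid
        radii = np.linalg.norm(cluster - center, axis=1)
        d = 2 * radii.mean()              # mean radius -> diameter estimate
        if 0.1 <= d <= 0.9:               # keep only the plausible size range
            diameters.append(d)
    return diameters
```

The resulting list can be written out with Python's built-in csv module. For a cloud of 2 million points, slicing by z first keeps the DBSCAN input small enough to be practical.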
a month ago


POST

Good day. I am interested in the full workflow for machine learning classification. I have .las data imported into ArcGIS Pro. I don't know how to create training data (.las) for arcgis.learn, or what the other steps are for the point cloud. There will be three classes: ground, trunk, and leaves.
07-30-2021 08:56 PM

