Decision Tree (CART): From Theoretical Foundations to Advanced Techniques and a From-Scratch Implementation in Python


CART (Classification and Regression Tree) is a classification and regression algorithm based on a binary tree, and it is the fundamental building block of random forests and gradient boosting, which rank among the most powerful machine learning algorithms available today. Depending on the implementation, decision trees need not be binary. Other popular decision tree algorithms include ID3, C4.5, and C5.0.

A notebook with these algorithms can be downloaded from Kaggle (eng) and GitHub (rus).

Structure of a Decision Tree

A decision tree consists of the following components: a root node, branches (left and right), decision nodes, and leaf (terminal) nodes. The root and decision nodes are questions with a threshold value used to split the training set into left and right parts, while the leaves hold the final predictions: the mean of the values in the leaf for regression, and the statistical mode for classification.

CART structure

Each leaf node corresponds to a particular rectangular region in the decision boundary plot over two features. If adjacent regions in the plot share the same value, they are automatically merged and shown as one larger region.

Decision boundary plot

Choosing the Best Split

Choosing the best split when building a decision node resembles the game of guessing a celebrity by asking questions that can only be answered "yes" or "no". Clearly, to find the right answer quickly you should ask questions that eliminate as many wrong options as possible: a question about gender immediately rules out half the candidates, whereas a question about age is far less informative. Put simply, choosing the best question means finding the feature whose particular value best separates the correct answer from the incorrect ones.

The measure of how well the question in a decision node separates the correct answer from the incorrect ones is called the node impurity. For classification, the following criteria are used to evaluate the quality of a split:

  • Gini impurity — a measure of diversity in the class probability distribution. If all elements in a node belong to a single class, the Gini impurity is 0; for a uniform distribution over two classes in a node it equals 0.5.

G_{i} = 1 - \sum\limits_{k = 1}^{n} P_{i,k}^{2}
  • Shannon entropy — a measure of the uncertainty or disorder of the classes in a node. It characterizes the amount of information needed to describe the state of the system: the higher the entropy, the less ordered the system, and vice versa.

S_{i} = - \sum\limits_{k = 1}^{n} P_{i,k} \log_{2} P_{i,k}
  • Misclassification error — the fraction of incorrectly classified elements in a node: the smaller this value, the lower the node impurity.

E_{i} = 1 - \max\limits_{k} P_{i,k}

Here, P_{i,k} is the proportion of class-k training samples in node i.
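To make the three criteria concrete, here is a small self-contained sketch (the toy label arrays and helper names are illustrative, not part of the article's implementation):

```python
import numpy as np

def gini(y):
    # Gini impurity: 1 - sum of squared class probabilities
    _, counts = np.unique(y, return_counts=True)
    p = counts / y.size
    return 1 - np.sum(p ** 2)

def entropy(y):
    # Shannon entropy: -sum(p * log2(p))
    _, counts = np.unique(y, return_counts=True)
    p = counts / y.size
    return -np.sum(p * np.log2(p))

def misclass_error(y):
    # misclassification error: 1 - share of the majority class
    _, counts = np.unique(y, return_counts=True)
    return 1 - counts.max() / y.size

mixed = np.array([0, 0, 1, 1])   # perfectly mixed binary node
pure = np.array([1, 1, 1, 1])    # pure node

print(gini(mixed), entropy(mixed), misclass_error(mixed))  # 0.5 1.0 0.5
print(gini(pure), misclass_error(pure))                    # 0.0 0.0
```

As expected, a perfectly mixed binary node gives the maximum values (0.5 for Gini, 1.0 for entropy), while a pure node gives 0 for all criteria.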

In practice, Gini impurity and Shannon entropy are used most often because they are more informative. As the plot for the binary classification case shows (where P+ is the probability of belonging to class "+"), the curve of doubled Gini impurity is very similar to the entropy curve: Gini yields slightly less balanced trees, but on large datasets it is preferable due to its lower computational cost.

Code for plotting the graph

import numpy as np
import matplotlib.pyplot as plt


def gini(probas):
    # binary Gini impurity: 1 - (p^2 + (1-p)^2)
    return 1 - (probas ** 2 + (1 - probas) ** 2)


def entropy(probas):
    # binary Shannon entropy: -(p*log2(p) + (1-p)*log2(1-p))
    return -(probas * np.log2(probas) + (1 - probas) * np.log2(1 - probas))


def misclass_error_rate(probas):
    # binary misclassification error: 1 - max(p, 1-p)
    return 1 - np.maximum(probas, 1 - probas)


# endpoints excluded to avoid log2(0) in the entropy
probas = np.linspace(0.001, 0.999, 250)
plt.plot(probas, entropy(probas), label="Shannon's entropy")
plt.plot(probas, 2 * gini(probas), label="Gini impurity x 2")
plt.plot(probas, 2 * misclass_error_rate(probas), label="Misclass error x 2")
plt.plot(probas, gini(probas), label="Gini impurity")
plt.plot(probas, misclass_error_rate(probas), label="Misclass error")
plt.title("Splitting criteria from P+ (binary classification case)")
plt.xlabel("P+")
plt.ylabel("Impurity")
plt.legend();

For regression, mean squared error is most commonly used to evaluate split quality, though Friedman MSE and MAE can also be used.

Loss Function

So how is the best split ultimately chosen? After selecting one of the split-quality criteria (e.g., Gini impurity or MSE), candidate thresholds are formed for each feature: the unique feature values are sorted in ascending order, and each threshold is the arithmetic mean of two adjacent values. The training set is then split into 2 subsets (nodes): everything less than or equal to the current threshold goes to the left subset, and everything greater goes to the right. The impurity of each subset is computed with the chosen criterion, and their weighted sum forms the loss function whose value is associated with that threshold. The threshold with the smallest loss over the training set (or subset) gives the best split.

The loss functions have the following form:

  • for classification:

J(k, t_{k}) = \frac{N_{m}^{left}}{N_{m}} G_{left} + \frac{N_{m}^{right}}{N_{m}} G_{right}
  • for regression:

J(k, t_{k}) = \frac{N_{m}^{left}}{N_{m}} MSE_{left} + \frac{N_{m}^{right}}{N_{m}} MSE_{right}

where J(k, t_{k}) \to \min.
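The threshold search described above can be sketched for a single feature as follows (the toy data and helper names are illustrative, not the article's implementation):

```python
import numpy as np

def gini(y):
    # Gini impurity of a label array
    _, counts = np.unique(y, return_counts=True)
    p = counts / y.size
    return 1 - np.sum(p ** 2)

def best_split(x, y):
    """Scan candidate thresholds (midpoints between sorted unique
    feature values) and return the one minimizing the weighted
    impurity J(k, t_k)."""
    values = np.unique(x)
    best_t, best_J = None, np.inf
    for lo, hi in zip(values[:-1], values[1:]):
        t = (lo + hi) / 2                      # midpoint threshold
        left, right = y[x <= t], y[x > t]
        J = (left.size * gini(left) + right.size * gini(right)) / y.size
        if J < best_J:
            best_t, best_J = t, J
    return best_t, best_J

# toy feature: small values belong to class 0, large values to class 1
x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0,   0,   0,   1,    1,    1])
t, J = best_split(x, y)
print(t, J)   # 6.5 0.0 -- this threshold separates the classes perfectly
```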

With entropy, a slightly different approach is used: the so-called information gain is computed as the difference between the entropy of the parent node and the (weighted) entropies of its children. The threshold with the maximum information gain over the training set (or subset) corresponds to the best split:

IG(Q) = S_{parent} - (\frac{N_{m}^{left}}{N_{m}} S_{child}^{left} + \frac{N_{m}^{right}}{N_{m}} S_{child}^{right}) \to max

where Q is the condition (question) used to split subset m.
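A minimal sketch of the information-gain computation, assuming a toy parent node that splits perfectly into two pure children:

```python
import numpy as np

def entropy(y):
    # Shannon entropy of a label array
    _, counts = np.unique(y, return_counts=True)
    p = counts / y.size
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right):
    # IG(Q) = S_parent - weighted sum of child entropies
    n = parent.size
    weighted = (left.size / n) * entropy(left) + (right.size / n) * entropy(right)
    return entropy(parent) - weighted

parent = np.array([0, 0, 0, 0, 1, 1, 1, 1])
left, right = parent[:4], parent[4:]           # a perfect split
print(information_gain(parent, left, right))   # 1.0
```

A perfect split of a balanced binary node recovers the full parent entropy of 1 bit; a useless split (children with the same class mix as the parent) would give a gain of 0.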

For clarity, consider the following example. Suppose we have 4 cats and 4 dogs, along with some of their visual features: "height", "ears" (floppy or pointed upright), and "whiskers" (present or absent). How does a decision tree build the root node to classify cats and dogs? Very simply: for each feature, the animals are divided into 2 groups according to the question, and then the Gini impurity is computed for each group. The feature threshold with the smallest loss value (the weighted sum of impurities) is used to split the root (decision) node. In this case the "height" feature has the lowest impurity, so the question in the decision node reads "height ⩽ 20 cm".

Note that categorical features are binarized, so the questions look like "ears ⩽ 0.5" and "whiskers ⩽ 0.5". When a categorical feature can take more than 2 values, other encodings such as label or one-hot encoding are applied.
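A quick sketch of what this binarization looks like in pandas (the column names and values are made up for the cat/dog example):

```python
import pandas as pd

# toy animal features from the example above
df = pd.DataFrame({
    "ears": ["floppy", "pointy", "pointy", "floppy"],
    "whiskers": ["yes", "yes", "no", "no"],
})

# binary categories can be mapped to 0/1, so splits become "ears <= 0.5"
binary = df.apply(lambda col: (col == col.unique()[0]).astype(int))

# multi-valued categories are usually one-hot encoded instead
onehot = pd.get_dummies(df)
print(binary)
print(sorted(onehot.columns))
```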


How a Decision Tree (CART) Works

The algorithm is built as follows:

  • 1) the root node is created from the best split;

  • 2) the training set is split into 2 subsets: everything that satisfies the split condition goes to the left node, the rest goes to the right;

  • 3) steps 1-2 are then repeated recursively for each training subset until one of the main stopping criteria is reached: maximum depth, maximum number of leaves, minimum number of samples in a leaf, or minimum impurity decrease in a node.
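The three steps above can be sketched as a short recursive function (a simplified illustration, not the full implementation given later in the article):

```python
import numpy as np

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / y.size
    return 1 - np.sum(p ** 2)

def best_split(X, y):
    # step 1: exhaustive search over features and midpoint thresholds
    best_f, best_t, best_J = 0, None, np.inf
    for f in range(X.shape[1]):
        vals = np.unique(X[:, f])
        for lo, hi in zip(vals[:-1], vals[1:]):
            t = float((lo + hi) / 2)
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            J = (left.size * gini(left) + right.size * gini(right)) / y.size
            if J < best_J:
                best_f, best_t, best_J = f, t, J
    return best_f, best_t

def grow_tree(X, y, depth=0, max_depth=3, min_samples=2):
    # stopping criteria: purity, depth limit, or too few samples
    if np.unique(y).size == 1 or depth == max_depth or y.size < min_samples:
        values, counts = np.unique(y, return_counts=True)
        return int(values[np.argmax(counts)])       # leaf = majority class

    f, t = best_split(X, y)                         # step 1: best question
    mask = X[:, f] <= t                             # step 2: partition
    left = grow_tree(X[mask], y[mask], depth + 1, max_depth, min_samples)
    right = grow_tree(X[~mask], y[~mask], depth + 1, max_depth, min_samples)
    return {(f, t): [left, right]}                  # step 3: recurse

X = np.array([[1.0], [2.0], [10.0], [11.0]])
y = np.array([0, 0, 1, 1])
print(grow_tree(X, y))   # {(0, 6.0): [0, 1]}
```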

Regularizing a Decision Tree

A tree grown without restrictions will, to some degree, overfit, and 2 approaches are used to address this problem: pre-pruning (limiting tree growth during construction with any of the stopping criteria) and post-pruning (cutting off unnecessary branches after the tree is fully built). The second approach is more delicate, as it produces an asymmetric and more accurate tree structure that keeps only the most informative decision nodes.
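Pre-pruning is just a matter of passing stopping criteria to the estimator; a quick sketch with scikit-learn (the hyperparameter values here are illustrative, not tuned):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# unrestricted tree vs. a pre-pruned tree with a depth cap and a
# minimum leaf size
full = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5,
                                random_state=0).fit(X, y)

print(full.get_depth(), full.get_n_leaves())
print(pruned.get_depth(), pruned.get_n_leaves())   # depth capped at 3
```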

There are 2 types of post-pruning:

  • Top-down pruning — checking and removing the least informative branches starts from the root node. This method has relatively low computational cost; however, as with pre-pruning, its main drawback is possible underfitting, since branches that could have contained informative nodes may be removed. The best-known variants are:

    • Pessimistic Error Pruning (PEP): branches with the highest expected error, against a preset threshold, are pruned;

    • Critical Value Pruning (CVP): branches whose informativeness falls below a certain critical value are pruned.

  • Bottom-up pruning — checking and removing the least informative branches starts from the leaves. This yields more accurate trees thanks to a full bottom-up traversal that evaluates every decision node, but at the cost of higher computational complexity. The most popular variants are:

    • Minimum Error Pruning (MEP): a search for the tree with the lowest expected error on a held-out set;

    • Reduced Error Pruning (REP): decision nodes are removed as long as accuracy, measured on a held-out set, does not drop;

    • Cost-complexity pruning (CCP): a series of subtrees is built by removing the weakest node of each, using a coefficient computed from the difference between the error of the subtree's root node and the total error of its leaves; the best subtree is then selected on a test set or via k-fold cross-validation.

Tree before and after post-pruning

Minimal cost-complexity pruning

The scikit-learn implementation of decision trees uses a modification of cost-complexity pruning that works as follows:

  • 1) first, a full unrestricted tree is built;

  • 2) then an error is computed for every single node in the tree, based on the weighted impurity for classification or the weighted MSE for regression;

  • 3) for each subtree within the tree, the total error of its leaves is computed;

  • 4) for each subtree, the alpha coefficient is computed from the difference between the error of the subtree's root node and the total error of its leaves, divided by the number of its leaves minus one;

  • 5) the subtree with the smallest \alpha_{ccp} is removed and becomes a leaf node, while the coefficient itself is stored in the cost_complexity_pruning_path array and corresponds to the newly pruned tree;

  • 6) steps 2-5 are repeated recursively for each subtree until pruning reaches the root node.

If a specific \alpha_{ccp} is set in advance, this coefficient is applied to every subtree, leaving the subtree with the lowest error among all subtrees; the best \alpha_{ccp} from cost_complexity_pruning_path, yielding the most accurate subtree, is chosen on a test set or via k-fold cross-validation.
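In scikit-learn this selection loop looks roughly like the following sketch (a plain train/test split is used here instead of cross-validation):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# compute the sequence of effective alphas for the full tree
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(
    X_train, y_train)

# refit one tree per alpha and keep the best scorer on the test set
best_alpha, best_score = 0.0, -1.0
for alpha in path.ccp_alphas[:-1]:   # the last alpha prunes down to the root
    clf = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha)
    clf.fit(X_train, y_train)
    score = clf.score(X_test, y_test)
    if score > best_score:
        best_alpha, best_score = alpha, score

print(best_alpha, best_score)
```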

Formulas for the calculations

Tree regularization:

R_\alpha(T) = R(T) + \alpha|\widetilde{T}|

Effective \alpha_{ccp}:

\alpha_{ccp} = \frac{R_{t} - R(T_{t})}{|\widetilde{T}_{t}| - 1}

Error of a decision node R_{t} or a leaf node R(T) for classification:

R_{node} = \frac{N_{m}}{N} G_{m}

Error of a decision node R_{t} or a leaf node R(T) for regression:

R_{node} = \frac{N_{m}}{N} MSE_{m}

Total error of the leaves of a tree/subtree:

R(T_{t}) = \sum\limits_{i=1}^{n} R(T_{i})

|\widetilde{T}| is the number of terminal (leaf) nodes.
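As a worked example of the effective-alpha formula, with made-up node error rates:

```python
# hypothetical values for a single subtree, chosen for illustration only
R_t = 0.5        # error of the decision node t if it were turned into a leaf
R_Tt = 0.1       # total error of the leaves of the subtree T_t
n_leaves = 3     # number of leaves in the subtree

# alpha_ccp = (R_t - R(T_t)) / (|T_t| - 1)
alpha_eff = (R_t - R_Tt) / (n_leaves - 1)
print(alpha_eff)   # 0.2
```

A small alpha means collapsing the subtree into a leaf costs little extra error per removed leaf, so such subtrees are pruned first.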

How cost-complexity pruning works on a simple tree

Importing the required libraries

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_linnerud
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_absolute_percentage_error
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor, plot_tree
from mlxtend.plotting import plot_decision_regions
from copy import deepcopy
from pprint import pprint

From-Scratch Implementation in Python

The original tree implementation uses a separate class to create nodes and store their information, but here the tree and all node information are kept in a dictionary with string keys. This change makes it possible to print the resulting tree and compare it with the scikit-learn implementation.

class DecisionTreeCART:

    def __init__(self, max_depth=100, min_samples=2, ccp_alpha=0.0, regression=False):
        self.max_depth = max_depth
        self.min_samples = min_samples
        self.ccp_alpha = ccp_alpha
        self.regression = regression
        self.tree = None
        self._y_dtype = None
        self._num_all_samples = None

    def _set_df_type(self, X, y, dtype):
        X = X.astype(dtype)
        y = y.astype(dtype) if self.regression else y
        self._y_dtype = y.dtype

        return X, y

    @staticmethod
    def _purity(y):
        unique_classes = np.unique(y)

        return unique_classes.size == 1

    @staticmethod
    def _is_leaf_node(node):
        return not isinstance(node, dict)   # if a node/tree is a leaf

    def _leaf_node(self, y):
        class_index = 0

        return np.mean(y) if self.regression else y.mode()[class_index]

    def _split_df(self, X, y, feature, threshold):
        feature_values = X[feature]
        left_indexes = X[feature_values <= threshold].index
        right_indexes = X[feature_values > threshold].index
        sizes = np.array([left_indexes.size, right_indexes.size])

        # with midpoint thresholds neither side can be empty; collapse to a leaf just in case
        return self._leaf_node(y) if any(sizes == 0) else (left_indexes, right_indexes)

    @staticmethod
    def _gini_impurity(y):
        _, counts_classes = np.unique(y, return_counts=True)
        squared_probabilities = np.square(counts_classes / y.size)
        gini_impurity = 1 - sum(squared_probabilities)

        return gini_impurity

    @staticmethod
    def _mse(y):
        mse = np.mean((y - y.mean()) ** 2)

        return mse

    @staticmethod
    def _cost_function(left_df, right_df, method):
        total_df_size = left_df.size + right_df.size
        p_left_df = left_df.size / total_df_size
        p_right_df = right_df.size / total_df_size
        J_left = method(left_df)
        J_right = method(right_df)
        J = p_left_df*J_left + p_right_df*J_right

        return J  # weighted Gini impurity or weighted mse (depends on a method)

    def _node_error_rate(self, y, method):
        if self._num_all_samples is None:
            self._num_all_samples = y.size   # num samples of all dataframe
        current_num_samples = y.size

        return current_num_samples / self._num_all_samples * method(y)

    def _best_split(self, X, y):
        features = X.columns
        min_cost_function = np.inf
        best_feature, best_threshold = None, None
        method = self._mse if self.regression else self._gini_impurity

        for feature in features:
            unique_feature_values = np.unique(X[feature])

            for i in range(1, len(unique_feature_values)):
                current_value = unique_feature_values[i]
                previous_value = unique_feature_values[i-1]
                threshold = (current_value + previous_value) / 2
                left_indexes, right_indexes = self._split_df(X, y, feature, threshold)
                left_labels, right_labels = y.loc[left_indexes], y.loc[right_indexes]
                current_J = self._cost_function(left_labels, right_labels, method)

                if current_J <= min_cost_function:
                    min_cost_function = current_J
                    best_feature = feature
                    best_threshold = threshold

        return best_feature, best_threshold

    def _stopping_conditions(self, y, depth, n_samples):
        return self._purity(y), depth == self.max_depth, n_samples < self.min_samples

    def _grow_tree(self, X, y, depth=0):
        current_num_samples = y.size
        X, y = self._set_df_type(X, y, np.longdouble)   # portable extended-precision alias
        method = self._mse if self.regression else self._gini_impurity

        if any(self._stopping_conditions(y, depth, current_num_samples)):
            RTi = self._node_error_rate(y, method)   # leaf node error rate
            leaf_node = f'{self._leaf_node(y)} | error_rate {RTi}'
            return leaf_node

        Rt = self._node_error_rate(y, method)   # decision node error rate
        best_feature, best_threshold = self._best_split(X, y)
        decision_node = f'{best_feature} <= {best_threshold} | ' \
                        f'as_leaf {self._leaf_node(y)} error_rate {Rt}'

        left_indexes, right_indexes = self._split_df(X, y, best_feature, best_threshold)
        left_X, right_X = X.loc[left_indexes], X.loc[right_indexes]
        left_labels, right_labels = y.loc[left_indexes], y.loc[right_indexes]

        # recursive part
        tree = {decision_node: []}
        left_subtree = self._grow_tree(left_X, left_labels, depth+1)
        right_subtree = self._grow_tree(right_X, right_labels, depth+1)

        if left_subtree == right_subtree:
            tree = left_subtree
        else:
            tree[decision_node].extend([left_subtree, right_subtree])

        return tree

    def _tree_error_rate_info(self, tree, error_rates_list):
        if self._is_leaf_node(tree):
            *_, leaf_error_rate = tree.split()
            error_rates_list.append(np.longdouble(leaf_error_rate))
        else:
            decision_node = next(iter(tree))
            left_subtree, right_subtree = tree[decision_node]
            self._tree_error_rate_info(left_subtree, error_rates_list)
            self._tree_error_rate_info(right_subtree, error_rates_list)

        RT = sum(error_rates_list)   # total leaf error rate of a tree
        num_leaf_nodes = len(error_rates_list)

        return RT, num_leaf_nodes

    @staticmethod
    def _ccp_alpha_eff(decision_node_Rt, leaf_nodes_RTt, num_leafs):

        return (decision_node_Rt - leaf_nodes_RTt) / (num_leafs - 1)

    def _find_weakest_node(self, tree, weakest_node_info):
        if self._is_leaf_node(tree):
            return tree

        decision_node = next(iter(tree))
        left_subtree, right_subtree = tree[decision_node]
        *_, decision_node_error_rate = decision_node.split()

        Rt = np.longdouble(decision_node_error_rate)
        RTt, num_leaf_nodes = self._tree_error_rate_info(tree, [])
        ccp_alpha = self._ccp_alpha_eff(Rt, RTt, num_leaf_nodes)
        decision_node_index, min_ccp_alpha_index = 0, 1

        if ccp_alpha <= weakest_node_info[min_ccp_alpha_index]:
            weakest_node_info[decision_node_index] = decision_node
            weakest_node_info[min_ccp_alpha_index] = ccp_alpha

        self._find_weakest_node(left_subtree, weakest_node_info)
        self._find_weakest_node(right_subtree, weakest_node_info)

        return weakest_node_info

    def _prune_tree(self, tree, weakest_node):
        if self._is_leaf_node(tree):
            return tree

        decision_node = next(iter(tree))
        left_subtree, right_subtree = tree[decision_node]
        left_subtree_index, right_subtree_index = 0, 1
        _, leaf_node = weakest_node.split('as_leaf ')

        if weakest_node is decision_node:
            tree = weakest_node
        if weakest_node in left_subtree:
            tree[decision_node][left_subtree_index] = leaf_node
        if weakest_node in right_subtree:
            tree[decision_node][right_subtree_index] = leaf_node

        self._prune_tree(left_subtree, weakest_node)
        self._prune_tree(right_subtree, weakest_node)

        return tree

    def cost_complexity_pruning_path(self, X: pd.DataFrame, y: pd.Series):
        tree = self._grow_tree(X, y)   # grow a full tree
        tree_error_rate, _ = self._tree_error_rate_info(tree, [])
        error_rates = [tree_error_rate]
        ccp_alpha_list = [0.0]

        while not self._is_leaf_node(tree):
            initial_node = [None, np.inf]
            weakest_node, ccp_alpha = self._find_weakest_node(tree, initial_node)
            tree = self._prune_tree(tree, weakest_node)
            tree_error_rate, _ = self._tree_error_rate_info(tree, [])

            error_rates.append(tree_error_rate)
            ccp_alpha_list.append(ccp_alpha)

        return np.array(ccp_alpha_list), np.array(error_rates)

    def _ccp_tree_error_rate(self, tree_error_rate, num_leaf_nodes):

        return tree_error_rate + self.ccp_alpha*num_leaf_nodes   # regularization

    def _optimal_tree(self, X, y):
        tree = self._grow_tree(X, y)   # grow a full tree
        min_RT_alpha, final_tree = np.inf, None

        while not self._is_leaf_node(tree):
            RT, num_leaf_nodes = self._tree_error_rate_info(tree, [])
            current_RT_alpha = self._ccp_tree_error_rate(RT, num_leaf_nodes)

            if current_RT_alpha <= min_RT_alpha:
                min_RT_alpha = current_RT_alpha
                final_tree = deepcopy(tree)

            initial_node = [None, np.inf]
            weakest_node, _ = self._find_weakest_node(tree, initial_node)
            tree = self._prune_tree(tree, weakest_node)

        return final_tree

    def fit(self, X: pd.DataFrame, y: pd.Series):
        self.tree = self._optimal_tree(X, y)

    def _traverse_tree(self, sample, tree):
        if self._is_leaf_node(tree):
            leaf, *_ = tree.split()
            return leaf

        decision_node = next(iter(tree))  # dict key
        left_node, right_node = tree[decision_node]
        feature, other = decision_node.split(' <=')
        threshold, *_ = other.split()
        feature_value = sample[feature]

        if np.longdouble(feature_value) <= np.longdouble(threshold):
            next_node = self._traverse_tree(sample, left_node)    # left_node
        else:
            next_node = self._traverse_tree(sample, right_node)   # right_node

        return next_node

    def predict(self, samples: pd.DataFrame):
        # apply traverse_tree method for each row in a dataframe
        results = samples.apply(self._traverse_tree, args=(self.tree,), axis=1)

        return np.array(results.astype(self._y_dtype))

Code for plotting the graphs

def tree_plot(sklearn_tree, Xa_train):
    plt.figure(figsize=(12, 18))  # customize according to the size of your tree
    plot_tree(sklearn_tree, feature_names=Xa_train.columns, filled=True, precision=6)
    plt.show()


def tree_scores_plot(estimator, ccp_alphas, train_data, test_data, metric, labels):
    train_scores, test_scores = [], []
    X_train, y_train = train_data
    X_test, y_test = test_data
    x_label, y_label = labels

    for ccp_alpha_i in ccp_alphas:
        estimator.ccp_alpha = ccp_alpha_i
        estimator.fit(X_train, y_train)
        train_pred_res = estimator.predict(X_train)
        test_pred_res = estimator.predict(X_test)

        train_score = metric(train_pred_res, y_train)
        test_score = metric(test_pred_res, y_test)
        train_scores.append(train_score)
        test_scores.append(test_score)

    fig, ax = plt.subplots()
    ax.set_xlabel(x_label)
    ax.set_ylabel(y_label)
    ax.set_title(f"{y_label} vs {x_label} for training and testing sets")
    ax.plot(ccp_alphas, train_scores, marker="o", label="train", drawstyle="steps-post")
    ax.plot(ccp_alphas, test_scores, marker="o", label="test", drawstyle="steps-post")
    ax.legend()
    plt.show()


def decision_boundary_plot(X, y, X_train, y_train, clf, feature_indexes, title=None):
    if not pd.api.types.is_integer_dtype(y):   # encode string labels as integers
        y = pd.Series(LabelEncoder().fit_transform(y))
        y_train = pd.Series(LabelEncoder().fit_transform(y_train))

    feature1_name, feature2_name = X.columns[feature_indexes]
    X_feature_columns = X.values[:, feature_indexes]
    X_train_feature_columns = X_train.values[:, feature_indexes]
    clf.fit(X_train_feature_columns, y_train.values)

    plot_decision_regions(X=X_feature_columns, y=y.values, clf=clf)
    plt.xlabel(feature1_name)
    plt.ylabel(feature2_name)
    plt.title(title)

Loading the datasets

The Iris dataset will be used to train the classification models, where the task is to correctly identify flower species from their features. For regression, the load_linnerud dataset from scikit-learn is used.
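The snippet below reads iris.csv from a local Google Drive path; if that file is unavailable, an equivalent DataFrame can be assembled from scikit-learn's bundled copy (the column renaming is an assumption to match the CSV's schema):

```python
import pandas as pd
from sklearn.datasets import load_iris

data = load_iris(as_frame=True)
iris = data.frame.rename(columns={
    "sepal length (cm)": "sepal_length",
    "sepal width (cm)": "sepal_width",
    "petal length (cm)": "petal_length",
    "petal width (cm)": "petal_width",
})
# map the integer target back to species names, as in the CSV version
iris["species"] = data.target_names[data.target]
iris = iris.drop(columns="target")
print(iris.shape)   # (150, 5)
```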

df_path = "/content/drive/MyDrive/iris.csv"
iris = pd.read_csv(df_path)
X1, y1 = iris.iloc[:, :-1], iris.iloc[:, -1]
X1_train, X1_test, y1_train, y1_test = train_test_split(X1, y1, test_size=0.3, random_state=0)
print(iris)


     sepal_length  sepal_width  petal_length  petal_width    species
0             5.1          3.5           1.4          0.2     setosa
1             4.9          3.0           1.4          0.2     setosa
2             4.7          3.2           1.3          0.2     setosa
3             4.6          3.1           1.5          0.2     setosa
4             5.0          3.6           1.4          0.2     setosa
..            ...          ...           ...          ...        ...
145           6.7          3.0           5.2          2.3  virginica
146           6.3          2.5           5.0          1.9  virginica
147           6.5          3.0           5.2          2.0  virginica
148           6.2          3.4           5.4          2.3  virginica
149           5.9          3.0           5.1          1.8  virginica

[150 rows x 5 columns]
X2, y2 = load_linnerud(return_X_y=True, as_frame=True)
y2 = y2['Pulse']
X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, test_size=0.2, random_state=0)
print(X2, y2, sep='\n')


    Chins  Situps  Jumps
0     5.0   162.0   60.0
1     2.0   110.0   60.0
2    12.0   101.0  101.0
3    12.0   105.0   37.0
4    13.0   155.0   58.0
5     4.0   101.0   42.0
6     8.0   101.0   38.0
7     6.0   125.0   40.0
8    15.0   200.0   40.0
9    17.0   251.0  250.0
10   17.0   120.0   38.0
11   13.0   210.0  115.0
12   14.0   215.0  105.0
13    1.0    50.0   50.0
14    6.0    70.0   31.0
15   12.0   210.0  120.0
16    4.0    60.0   25.0
17   11.0   230.0   80.0
18   15.0   225.0   73.0
19    2.0   110.0   43.0

0     50.0
1     52.0
2     58.0
3     62.0
4     46.0
5     56.0
6     56.0
7     60.0
8     74.0
9     56.0
10    50.0
11    52.0
12    64.0
13    50.0
14    46.0
15    62.0
16    54.0
17    52.0
18    54.0
19    68.0
Name: Pulse, dtype: float64

Training the Models and Evaluating the Results

For classification, the tree achieved high accuracy on the iris data. Pruning did not increase accuracy, but it did find a more optimal tree (alpha=0.0143) with the same accuracy and fewer nodes.

For regression, however, pruning did improve accuracy: alpha=3.613 produces the tree with the minimum error on the test set. All results are shown below.

For a more convenient view of the results from the from-scratch implementations, it is better to use the notebook linked at the beginning.

Classification before pruning

tree_classifier = DecisionTreeCART()
tree_classifier.fit(X1_train, y1_train)
clf_ccp_alphas, _ = tree_classifier.cost_complexity_pruning_path(X1_train, y1_train)
clf_ccp_alphas = clf_ccp_alphas[:-1]

sk_tree_classifier = DecisionTreeClassifier(random_state=0)
sk_tree_classifier.fit(X1_train, y1_train)
sk_clf_path = sk_tree_classifier.cost_complexity_pruning_path(X1_train, y1_train)
sk_clf_ccp_alphas = sk_clf_path.ccp_alphas[:-1]

sk_clf_estimator = DecisionTreeClassifier(random_state=0)
train1_data, test1_data = [X1_train, y1_train], [X1_test, y1_test]
metric = accuracy_score
labels = ['Alpha', 'Accuracy']

pprint(tree_classifier.tree, width=180)
tree_plot(sk_tree_classifier, X1_train)
print(f'tree alphas: {clf_ccp_alphas}', f'sklearn alphas: {sk_clf_ccp_alphas}', sep='\n')
tree_scores_plot(sk_clf_estimator, clf_ccp_alphas, train1_data, test1_data, metric, labels)


{'petal_width <= 0.75 | as_leaf virginica error_rate 0.6643083900226757': ['setosa | error_rate 0.0',
                                                                           {'petal_length <= 4.95 | as_leaf virginica error_rate 0.33480885311871234': [{'petal_width <= 1.65 | as_leaf versicolor error_rate 0.0521008403361345': ['versicolor '
                                                                                                                                                                                                                                    '| '
                                                                                                                                                                                                                                    'error_rate '
                                                                                                                                                                                                                                    '0.0',
    {'sepal_width <= 3.1 | as_leaf virginica error_rate 0.014285714285714287': ['virginica | error_rate 0.0',
                                                                                'versicolor | error_rate 0.0']}]},
    {'petal_width <= 1.75 | as_leaf virginica error_rate 0.018532818532818515': [{'petal_width <= 1.65 | as_leaf virginica error_rate 0.014285714285714287': ['virginica | error_rate 0.0',
                                                                                                                                                              'versicolor | error_rate 0.0']},
                                                                                 'virginica | error_rate 0.0']}]}]}]}
tree alphas: [0.         0.00926641 0.01428571 0.03781513 0.26417519]
sklearn alphas: [0.         0.00926641 0.01428571 0.03781513 0.26417519]

Classification after pruning

tree_clf_prediction = tree_classifier.predict(X1_test)
tree_clf_accuracy = accuracy_score(tree_clf_prediction, y1_test)
sk_tree_clf_prediction = sk_tree_classifier.predict(X1_test)
sk_clf_accuracy = accuracy_score(sk_tree_clf_prediction, y1_test)

best_clf_ccp_alpha = 0.0143 # based on a plot
best_tree_classifier = DecisionTreeCART(ccp_alpha=best_clf_ccp_alpha)
best_tree_classifier.fit(X1_train, y1_train)
best_tree_clf_prediction = best_tree_classifier.predict(X1_test)
best_tree_clf_accuracy = accuracy_score(best_tree_clf_prediction, y1_test)

best_sk_tree_classifier = DecisionTreeClassifier(random_state=0, ccp_alpha=best_clf_ccp_alpha)
best_sk_tree_classifier.fit(X1_train, y1_train)
best_sk_tree_clf_prediction = best_sk_tree_classifier.predict(X1_test)
best_sk_clf_accuracy = accuracy_score(best_sk_tree_clf_prediction, y1_test)

print('tree prediction', tree_clf_prediction, ' ', sep='\n')
print('sklearn prediction', sk_tree_clf_prediction, ' ', sep='\n')
print('best tree prediction', best_tree_clf_prediction, ' ', sep='\n')
print('best sklearn prediction', best_sk_tree_clf_prediction, ' ', sep='\n')

pprint(best_tree_classifier.tree, width=180)
tree_plot(best_sk_tree_classifier, X1_train)
print(f'our tree pruning accuracy: before {tree_clf_accuracy} -> after {best_tree_clf_accuracy}')
print(f'sklearn tree pruning accuracy: before {sk_clf_accuracy} -> after {best_sk_clf_accuracy}')


tree prediction
['virginica' 'versicolor' 'setosa' 'virginica' 'setosa' 'virginica'
 'setosa' 'versicolor' 'versicolor' 'versicolor' 'virginica' 'versicolor'
 'versicolor' 'versicolor' 'versicolor' 'setosa' 'versicolor' 'versicolor'
 'setosa' 'setosa' 'virginica' 'versicolor' 'setosa' 'setosa' 'virginica'
 'setosa' 'setosa' 'versicolor' 'versicolor' 'setosa' 'virginica'
 'versicolor' 'setosa' 'virginica' 'virginica' 'versicolor' 'setosa'
 'virginica' 'versicolor' 'versicolor' 'virginica' 'setosa' 'virginica'
 'setosa' 'setosa']
 
sklearn prediction
['virginica' 'versicolor' 'setosa' 'virginica' 'setosa' 'virginica'
 'setosa' 'versicolor' 'versicolor' 'versicolor' 'virginica' 'versicolor'
 'versicolor' 'versicolor' 'versicolor' 'setosa' 'versicolor' 'versicolor'
 'setosa' 'setosa' 'virginica' 'versicolor' 'setosa' 'setosa' 'virginica'
 'setosa' 'setosa' 'versicolor' 'versicolor' 'setosa' 'virginica'
 'versicolor' 'setosa' 'virginica' 'virginica' 'versicolor' 'setosa'
 'virginica' 'versicolor' 'versicolor' 'virginica' 'setosa' 'virginica'
 'setosa' 'setosa']
 
best tree prediction
['virginica' 'versicolor' 'setosa' 'virginica' 'setosa' 'virginica'
 'setosa' 'versicolor' 'versicolor' 'versicolor' 'virginica' 'versicolor'
 'versicolor' 'versicolor' 'versicolor' 'setosa' 'versicolor' 'versicolor'
 'setosa' 'setosa' 'virginica' 'versicolor' 'setosa' 'setosa' 'virginica'
 'setosa' 'setosa' 'versicolor' 'versicolor' 'setosa' 'virginica'
 'versicolor' 'setosa' 'virginica' 'virginica' 'versicolor' 'setosa'
 'virginica' 'versicolor' 'versicolor' 'virginica' 'setosa' 'virginica'
 'setosa' 'setosa']
 
best sklearn prediction
['virginica' 'versicolor' 'setosa' 'virginica' 'setosa' 'virginica'
 'setosa' 'versicolor' 'versicolor' 'versicolor' 'virginica' 'versicolor'
 'versicolor' 'versicolor' 'versicolor' 'setosa' 'versicolor' 'versicolor'
 'setosa' 'setosa' 'virginica' 'versicolor' 'setosa' 'setosa' 'virginica'
 'setosa' 'setosa' 'versicolor' 'versicolor' 'setosa' 'virginica'
 'versicolor' 'setosa' 'virginica' 'virginica' 'versicolor' 'setosa'
 'virginica' 'versicolor' 'versicolor' 'virginica' 'setosa' 'virginica'
 'setosa' 'setosa']
 
{'petal_width <= 0.75 | as_leaf virginica error_rate 0.6643083900226757': ['setosa | error_rate 0.0',
 {'petal_length <= 4.95 | as_leaf virginica error_rate 0.33480885311871234': [{'petal_width <= 1.65 | as_leaf versicolor error_rate 0.0521008403361345': ['versicolor | error_rate 0.0',
                                                                                                                                                          'virginica error_rate 0.014285714285714287']},
                                                                              'virginica error_rate 0.018532818532818515']}]}
our tree pruning accuracy: before 0.9777777777777777 -> after 0.9777777777777777
sklearn tree pruning accuracy: before 0.9777777777777777 -> after 0.9777777777777777
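Above, best_clf_ccp_alpha = 0.0143 is picked by eye from the accuracy-vs-alpha plot. A more systematic option is to cross-validate over the candidate alphas returned by the pruning path. The sketch below is not from the original notebook; it re-runs the idea on iris so it is self-contained, and the variable names are illustrative:

```python
# Sketch: choosing ccp_alpha by cross-validation instead of reading a plot.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Candidate alphas come from sklearn's pruning path; the last one collapses
# the tree to a single root leaf, so it is dropped, as in the article.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
ccp_alphas = path.ccp_alphas[:-1]

# Mean 5-fold accuracy for every candidate alpha.
scores = [
    cross_val_score(DecisionTreeClassifier(random_state=0, ccp_alpha=a), X, y, cv=5).mean()
    for a in ccp_alphas
]
best_alpha = ccp_alphas[int(np.argmax(scores))]
print(f'best ccp_alpha: {best_alpha:.4f} (CV accuracy {max(scores):.4f})')
```

The same loop works for regression by swapping in DecisionTreeRegressor and a scoring metric such as neg_mean_absolute_percentage_error.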

Visualizing the decision boundaries before and after pruning

feature_indexes = [2, 3]
title1 = 'Classification tree surface before pruning'
decision_boundary_plot(X1, y1, X1_train, y1_train, sk_tree_classifier, feature_indexes, title1)
feature_indexes = [2, 3]
title2 = 'Classification tree surface after pruning'
decision_boundary_plot(X1, y1, X1_train, y1_train, best_sk_tree_classifier, feature_indexes, title2)
feature_indexes = [2, 3]
plt.figure(figsize=(10, 15))

for i, alpha in enumerate(clf_ccp_alphas):
    sk_tree_clf = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha)
    plt.subplot(3, 2, i + 1)
    plt.subplots_adjust(hspace=0.5)
    title = f'ccp_alpha = {alpha}'
    decision_boundary_plot(X1, y1, X1_train, y1_train, sk_tree_clf, feature_indexes, title)
Pruning at each of the obtained ccp alphas
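decision_boundary_plot here is a helper from the notebook. Its core mechanics can be sketched as follows (a hypothetical minimal version, not the article's implementation): evaluate the fitted tree on a dense grid over the two selected features, so every grid point lands in one of the axis-parallel rectangles the tree carves out:

```python
# Minimal sketch of what a decision-boundary helper presumably does:
# predict on a dense 2-D grid and label each cell by the predicted class.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
feature_indexes = [2, 3]            # petal length and width, as in the article
X2d = X[:, feature_indexes]

clf = DecisionTreeClassifier(random_state=0).fit(X2d, y)

# Grid spanning the two features with a small margin.
x_min, x_max = X2d[:, 0].min() - 0.5, X2d[:, 0].max() + 0.5
y_min, y_max = X2d[:, 1].min() - 0.5, X2d[:, 1].max() + 0.5
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
                     np.arange(y_min, y_max, 0.02))

# One class label per grid point; reshaping back gives the coloured regions
# (rectangles, since CART splits are axis-parallel).
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
print(Z.shape, np.unique(Z))
```

Colouring Z with plt.contourf(xx, yy, Z) and scattering the training points on top yields plots like the ones above.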

Regression before pruning

tree_regressor = DecisionTreeCART(regression=True)
tree_regressor.fit(X2_train, y2_train)
reg_ccp_alphas, _ = tree_regressor.cost_complexity_pruning_path(X2_train, y2_train)
reg_ccp_alphas = reg_ccp_alphas[:-1]

sk_tree_regressor = DecisionTreeRegressor(random_state=0)
sk_tree_regressor.fit(X2_train, y2_train)
sk_reg_path = sk_tree_regressor.cost_complexity_pruning_path(X2_train, y2_train)
sk_reg_ccp_alphas = sk_reg_path.ccp_alphas[:-1]

reg_estimator = DecisionTreeCART(regression=True)
sk_reg_estimator = DecisionTreeRegressor(random_state=0)
train2_data, test2_data = [X2_train, y2_train], [X2_test, y2_test]
metric = mean_absolute_percentage_error
labels = ['Alpha', 'MAPE']

pprint(tree_regressor.tree)
tree_plot(sk_tree_regressor, X2_train)

print(f'CART alphas: {reg_ccp_alphas}')
tree_scores_plot(reg_estimator, reg_ccp_alphas, train2_data, test2_data, metric, labels)
print(f'sklearn_alphas: {sk_reg_ccp_alphas}')
tree_scores_plot(sk_reg_estimator, sk_reg_ccp_alphas, train2_data, test2_data, metric, labels)


{'Jumps <= 90.5 | as_leaf 54.625 error_rate 29.359375': [{'Jumps <= 46.0 | as_leaf 52.90909090909091 error_rate 17.181818181818183': [{'Jumps <= 34.0 | as_leaf 54.857142857142854 error_rate 11.428571428571429': [{'Jumps <= 28.0 | as_leaf 50.0 error_rate 2.0': ['54.0 | error_rate 0.0',
      '46.0 | error_rate 0.0']},
     {'Chins <= 14.5 | as_leaf 56.8 error_rate 5.3': [{'Situps <= 103.0 | as_leaf 58.5 error_rate 1.6875': ['56.0 | error_rate 0.0',
        {'Jumps <= 38.5 | as_leaf 61.0 error_rate 0.125': ['62.0 | error_rate 0.0',
         '60.0 | error_rate 0.0']}]},
       '50.0 | error_rate 0.0']}]},
    {'Chins <= 12.0 | as_leaf 49.5 error_rate 1.1875': [{'Jumps <= 70.0 | as_leaf 50.666666666666664 error_rate 0.16666666666666666': ['50.0 | error_rate 0.0',
       '52.0 | error_rate 0.0']},
      '46.0 | error_rate 0.0']}]},
   {'Jumps <= 110.0 | as_leaf 58.4 error_rate 5.7': [{'Jumps <= 103.0 | as_leaf 61.0 error_rate 1.125': ['58.0 | error_rate 0.0',
      '64.0 | error_rate 0.0']},
     {'Chins <= 12.5 | as_leaf 56.666666666666664 error_rate 3.1666666666666665': ['62.0 | error_rate 0.0',
       {'Jumps <= 182.5 | as_leaf 54.0 error_rate 0.5': ['52.0 | error_rate 0.0',
        '56.0 | error_rate 0.0']}]}]}]}
CART alphas: [0.         0.125      0.16666667 0.5        1.02083333 1.125
 1.5625     2.         2.0375     3.6125     4.12857143 4.56574675]
sklearn_alphas: [0.         0.125      0.16666667 0.5        1.02083333 1.125
 1.5625     2.         2.0375     3.6125     4.12857143 4.56574675]

Regression after pruning

tree_reg_prediction = tree_regressor.predict(X2_test)
tree_reg_error = mean_absolute_percentage_error(tree_reg_prediction, y2_test)
sk_tree_reg_prediction = sk_tree_regressor.predict(X2_test)
sk_reg_error = mean_absolute_percentage_error(sk_tree_reg_prediction, y2_test)

best_reg_ccp_alpha = 3.613   # based on a plot
best_tree_regressor = DecisionTreeCART(ccp_alpha=best_reg_ccp_alpha, regression=True)
best_tree_regressor.fit(X2_train, y2_train)
best_tree_reg_prediction = best_tree_regressor.predict(X2_test)
lowest_tree_reg_error = mean_absolute_percentage_error(best_tree_reg_prediction, y2_test)

best_sk_tree_regressor = DecisionTreeRegressor(random_state=0, ccp_alpha=best_reg_ccp_alpha)
best_sk_tree_regressor.fit(X2_train, y2_train)
best_sk_tree_reg_prediction = best_sk_tree_regressor.predict(X2_test)
lowest_sk_reg_error = mean_absolute_percentage_error(best_sk_tree_reg_prediction, y2_test)

print('tree prediction', tree_reg_prediction, ' ', sep='\n')
print('sklearn prediction', sk_tree_reg_prediction, ' ', sep='\n')
print('best tree prediction', best_tree_reg_prediction, ' ', sep='\n')
print('best sklearn prediction', best_sk_tree_reg_prediction, ' ', sep='\n')

pprint(best_tree_regressor.tree)
tree_plot(best_sk_tree_regressor, X2_train)
print(f'tree error: before {tree_reg_error} -> after pruning {lowest_tree_reg_error}')
print(f'sklearn tree error: before {sk_reg_error} -> after pruning {lowest_sk_reg_error}')


tree prediction
[46. 50. 60. 50.]
 
sklearn prediction
[46. 50. 60. 50.]
 
best tree prediction
[49.5 49.5 56.8 56.8]
 
best sklearn prediction
[49.5 49.5 56.8 56.8]
 
{'Jumps <= 90.5 | as_leaf 54.625 error_rate 29.359375': [{'Jumps <= 46.0 | as_leaf 52.90909090909091 error_rate 17.181818181818183': [{'Jumps <= 34.0 | as_leaf 54.857142857142854 error_rate 11.428571428571429': ['50.0 error_rate 2.0',
      '56.8 error_rate 5.3']},
     '49.5 error_rate 1.1875']},
   '58.4 error_rate 5.7']}
tree error: before 0.20681159420289855 -> after pruning 0.16035353535353536
sklearn tree error: before 0.20681159420289855 -> after pruning 0.1603535353535354
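The Chins/Situps/Jumps features above match sklearn's Linnerud dataset. As a quick sanity check of what pruning does structurally, one can compare leaf counts before and after; the sketch below assumes the Pulse column as the target, which may differ from the article's exact setup:

```python
# Sketch: pruning shrinks the tree, measured via leaf counts.
# Target choice (Pulse) is an assumption, not necessarily the article's setup.
from sklearn.datasets import load_linnerud
from sklearn.tree import DecisionTreeRegressor

linnerud = load_linnerud()
X = linnerud.data              # Chins, Situps, Jumps
y = linnerud.target[:, 2]      # Pulse column

full = DecisionTreeRegressor(random_state=0).fit(X, y)
pruned = DecisionTreeRegressor(random_state=0, ccp_alpha=3.613).fit(X, y)  # alpha chosen in the article

print('leaves before pruning:', full.get_n_leaves())
print('leaves after pruning: ', pruned.get_n_leaves())
```

With a positive ccp_alpha the weakest-link subtrees are collapsed, so the pruned tree always has at most as many leaves as the full one.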

Advantages and disadvantages of decision trees

Advantages:

  • easy to interpret and visualize;

  • handles nonlinear relationships in the data reasonably well;

  • requires little special preparation of the training set;

  • relatively fast training and prediction.

Disadvantages:

  • finding the optimal tree is an NP-complete problem;

  • unstable: even a small change in the data can noticeably alter the tree;

  • prone to overfitting due to sensitivity to noise and outliers in the data.
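The instability drawback is easy to observe directly: deleting a single training sample can already change the tree's root split. A small demonstration sketch (on iris, purely illustrative):

```python
# Sketch: leave-one-out refits show how often the root split changes.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

def root_split(X, y):
    """Fit a tree and return (feature index, threshold) of its root split."""
    t = DecisionTreeClassifier(random_state=0).fit(X, y)
    return t.tree_.feature[0], t.tree_.threshold[0]

full_split = root_split(X, y)
# Drop one sample at a time and count how often the root split differs.
changed = sum(
    root_split(np.delete(X, i, axis=0), np.delete(y, i)) != full_split
    for i in range(len(X))
)
print(f'root split changed in {changed} of {len(X)} leave-one-out refits')
```

Since the split threshold is a midpoint between two neighbouring sample values, removing either boundary sample is enough to move it, which is exactly why bagging-based ensembles average many such unstable trees.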

Additional resources

The paper "The CART Decision Tree for Mining Data Streams" by Leszek Rutkowski, Maciej Jaworski, Lena Pietruczuk, and Piotr Duda.

Documentation:

Lectures: one, two, three, four.

Step-by-step decision tree construction: one, two, three.

Pruning:


Bagging and random forests 🡆

Источник

