The Titanic classification problem involves predicting whether a passenger on the Titanic survived or not, based on various features available about each passenger. The sinking of the Titanic in 1912 is one of the most infamous maritime disasters in history, and this dataset has been widely used as a benchmark for predictive modeling.
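A task like this is usually framed as binary classification. A minimal sketch with scikit-learn, using synthetic stand-in data since the actual Titanic passenger features are not part of this description:

```python
# Binary-classification sketch for a Titanic-style survival problem.
# Synthetic numeric features stand in for real passenger data
# (class, age, fare, ...); this is illustrative, not the repo's code.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 500 "passengers", 4 features, a binary survived/not-survived label
X, y = make_classification(n_samples=500, n_features=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"hold-out accuracy: {acc:.2f}")
```

The same train/evaluate split applies when the real dataset (e.g. Kaggle's `train.csv`) is loaded, with categorical columns such as sex and passenger class encoded numerically first.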
This project focuses on predicting the prices of clothes based on various features such as category, size, and color. Using supervised machine learning algorithms, we aim to build a robust predictive model that estimates prices with high accuracy.
This repository serves as a comprehensive resource for understanding and implementing various feature selection techniques, gaining familiarity with Jupyter Notebook, and mastering model training and evaluation.
Predicting heart failure using Decision Tree algorithm with a dataset sourced from Kaggle. Achieved 99% accuracy, demonstrating robust performance as a binary classifier.
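The Kaggle data itself is not reproduced here, but the decision-tree setup can be sketched with scikit-learn, using its bundled breast-cancer dataset as a stand-in binary medical classification task:

```python
# Decision-tree binary classifier sketch; sklearn's breast-cancer dataset
# stands in for the Kaggle heart-failure data (an assumption for illustration).
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Cap the depth to curb overfitting; an unconstrained tree can memorize
# the training set and report misleadingly high accuracy.
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```

Reporting accuracy on a held-out split, as above, is what makes a figure like 99% meaningful for a binary classifier.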
This project predicts whether a credit-card holder will default on the following month's payment, implementing various ML classification algorithms in a modular coding format.
Welcome to the Fraud Detection Project! This repository uses machine learning 🧠 to detect fraudulent transactions 💳. It includes data preprocessing 🛠️, model training 📚, evaluation 📊, and visualization 📈. Explore, experiment, and contribute 🤝 to improve fraud detection accuracy. Check the README for setup and usage instructions.
This project applied machine learning methods to produce accurate predictions and provide valuable insights into how health factors affect exam performance.
One notebook trains a vegetable classification model with InceptionV3 using TensorFlow and Keras. The second notebook showcases the pre-trained model's inference on vegetable categories, loading InceptionV3 and enhancing image features. Together, they offer a compact solution for vegetable classification through deep learning.
VisionWiz (威智慧眼) is an open-source AI tool designed for the efficient development and optimization of AI models. It includes a comprehensive set of features covering several key processes, such as model training, data collection, data labeling, and model testing.
This project trains a machine learning model on consumer brand data. A preliminary model is developed first, then refined through fine-tuning to improve results. A comprehensive testing suite validates the accuracy and reliability of the model's predictions.
This project aims to predict the prices of cars based on various features such as year of manufacture, brand, mileage, and other relevant factors. Leveraging machine learning algorithms, this project explores different regression techniques to create an accurate model for car price prediction.
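One of the regression techniques such a project might compare is a random forest; a minimal sketch on synthetic data, where the feature columns and the price rule are invented purely for illustration:

```python
# Regression sketch for car-price-style prediction. The features (year,
# mileage) and the price formula are hypothetical stand-ins for real data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
year = rng.integers(2000, 2023, n)
mileage = rng.uniform(5_000, 200_000, n)
X = np.column_stack([year, mileage])
# Assumed relationship: newer cars cost more, high-mileage cars cost less
price = 500 * (year - 2000) - 0.05 * mileage + rng.normal(0, 1000, n)

X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=0)
reg = RandomForestRegressor(n_estimators=100, random_state=0)
reg.fit(X_train, y_train)
r2 = r2_score(y_test, reg.predict(X_test))
print(f"test R^2: {r2:.2f}")
```

Swapping in `LinearRegression` or `GradientBoostingRegressor` and comparing the held-out R² is the usual way to explore which technique fits best.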
Welcome to the Machine Learning Repository! This repository is a collection of notebooks showcasing various machine learning projects and implementations. It includes decision trees, random forests, support vector machines, and more.
Repository for predicting house prices using the Ames Housing dataset. Implements advanced regression techniques with TensorFlow Decision Forests, including Random Forests. The project covers data exploration, feature engineering, model training, evaluation, and visualization.
Alphabet Soup Charity: A deep learning model to predict the success of charitable donations, enhancing decision-making for fund allocation and impact optimization.
AlmaBetter Capstone Project - Classification model to predict the sentiment of COVID-19 tweets. The tweets were pulled from Twitter and then manually tagged.
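A common baseline for this kind of tweet sentiment task is TF-IDF features fed into a linear classifier. A tiny sketch, where the example "tweets" and labels are invented placeholders rather than the project's real data:

```python
# Sentiment-classification baseline sketch: TF-IDF + logistic regression.
# The toy tweets below are fabricated for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "staying safe and feeling hopeful today",
    "grateful for the frontline workers",
    "lockdown again, this is awful",
    "so frustrated with the shortages",
]
labels = ["positive", "positive", "negative", "negative"]

# Pipeline: vectorize the raw text, then fit a linear classifier on it
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(tweets, labels)
pred = clf.predict(["feeling hopeful and grateful"])[0]
print(pred)
```

With the real tagged dataset, the same pipeline scales to thousands of tweets; only the loading step changes.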