language:
  - en
tags:
  - automl
  - image-classification
  - autogluon
  - cmu-course
datasets:
  - keerthikoganti/lipstick-image-dataset
metrics:
  - accuracy
  - f1
model-index:
  - name: Lipstick Detection (Neural Network AutoML)
    results:
      - task:
          type: image-classification
          name: Binary Image Classification
        dataset:
          name: keerthikoganti/lipstick-image-dataset
          type: classification
          split: augmented
        metrics:
          - type: accuracy
            value: 1.0
          - type: f1
            value: 1.0
      - task:
          type: image-classification
          name: Binary Image Classification
        dataset:
          name: keerthikoganti/lipstick-image-dataset
          type: classification
          split: original
        metrics:
          - type: accuracy
            value: 0.93
          - type: f1
            value: 0.93

Model Card for Lipstick Detection (Neural Network AutoML)

This model performs binary image classification: lipstick (1) vs. no lipstick (0).
It was trained with AutoGluon MultiModal (AutoMM), which automatically searched over several neural network backbones (ResNet18, ResNet34, EfficientNet-B0) under a fixed budget with early stopping.
The best-performing backbone was EfficientNet-B0.
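
The original training script is not reproduced in this card; the snippet below is only a minimal sketch of how such a backbone search can be set up with AutoMM's hyperparameter-tuning support. The column names (image, label), the eval metric, the searcher/scheduler choice, the trial count, and the time limit are illustrative assumptions, not the settings of the original run.

from autogluon.multimodal import MultiModalPredictor
from ray import tune
import pandas as pd

# Assumed training table: one row per image, with a file path and a 0/1 label
train_data = pd.DataFrame({
    "image": ["with_lipstick_001.jpg", "no_lipstick_001.jpg"],  # placeholder paths
    "label": [1, 0],
})

predictor = MultiModalPredictor(label="label", eval_metric="f1")
predictor.fit(
    train_data,
    hyperparameters={
        # Search over the three backbones named above (timm checkpoint names)
        "model.timm_image.checkpoint_name": tune.choice(
            ["resnet18", "resnet34", "efficientnet_b0"]
        ),
    },
    hyperparameter_tune_kwargs={
        "searcher": "bayes",   # assumed; a random searcher is another option
        "scheduler": "ASHA",
        "num_trials": 3,
    },
    time_limit=1800,  # assumed budget in seconds; each trial early-stops on validation score
)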


Model Details

Model Description

  • Developed by: Xinxuan Tang (CMU)
  • Dataset curated by: Keerthi Koganti (CMU)
  • Model type: AutoML neural network (best = EfficientNet-B0)
  • Language(s): N/A (image dataset)
  • Finetuned from: timm/efficientnet_b0 pretrained weights

Model Sources


Uses

Direct Use

  • Educational practice in binary image classification.
  • Experimenting with AutoML search over neural architectures.

Downstream Use

  • Could be adapted for teaching transfer learning workflows.

Out-of-Scope Use

  • Not suitable for real-world cosmetics applications.
  • Not for deployment in automated decision-making or safety-critical contexts.

Bias, Risks, and Limitations

  • Small dataset: limited original images, heavy reliance on synthetic augmentation.
  • Domain bias: images are from a single source/product and background setup.
  • Synthetic augmentation: does not capture real-world variation in lighting, product types, or diversity of appearances.

Recommendations

Use primarily for teaching and demonstration purposes.
Do not generalize conclusions beyond this dataset.


How to Get Started with the Model

from autogluon.multimodal import MultiModalPredictor
import pandas as pd

# Load the trained predictor from its saved directory
predictor = MultiModalPredictor.load("autogluon_efficientnet_b0/")

# Run inference on a new image; the "image" column must match the
# image column name used at training time
test_data = pd.DataFrame([{"image": "example.jpg"}])
print(predictor.predict(test_data))  # 1 = lipstick, 0 = no lipstick
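
For class probabilities or a quick check against labeled images, the same predictor also exposes predict_proba and evaluate. The file names and labels below are placeholders, and the column names must match those used during training.

# Class probabilities per label (one column per class, e.g. 0 and 1)
print(predictor.predict_proba(test_data))

# Quick evaluation on a small labeled set (placeholder paths and labels)
labeled = pd.DataFrame({
    "image": ["with_lipstick.jpg", "no_lipstick.jpg"],
    "label": [1, 0],
})
print(predictor.evaluate(labeled, metrics=["accuracy", "f1"]))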