Sadhana-24 committed · Commit 8f276cb · verified · 1 Parent(s): 1c93f89

Update README.md


# Urban Streetscape Dataset for Vision Language Models

A curated subset of 10,000 street view images with 25 essential features optimized for training vision language models on urban environment analysis tasks.

## Dataset Description

This dataset pairs street-view imagery with comprehensive annotations covering infrastructure characteristics, visual perception metrics, environmental context, and semantic segmentation data.

The 10,000 images are a carefully curated subset of the NUS Global Streetscapes repository, developed by the Urban Analytics Lab at the National University of Singapore. The full repository is one of the world's largest urban perception datasets, containing over 10 million street-view images from 688 cities across 210 countries and territories.

This subset combines the global scale and methodological rigor of the NUS research with focused curation for practical computer vision and urban planning applications. Each sample includes rich multi-modal annotations spanning visual features, human perception ratings, infrastructure metadata, and environmental context.
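
For orientation, here is a minimal loading sketch using the Hugging Face `datasets` library. The repository ID is a placeholder (this card does not state the final repo path), and the column names follow the feature list in the next section.

```python
from datasets import load_dataset

# Placeholder repo ID -- replace with the actual Hugging Face path of this subset.
ds = load_dataset("your-username/urban-streetscapes-10k", split="train")

# Inspect one sample: its identifiers and coordinates.
sample = ds[0]
print(sample["uuid"], sample["source"], sample["orig_id"])
print("location:", sample["lat"], sample["lon"])

# A lightweight tabular view of the annotations (the image column is dropped).
df = ds.remove_columns(["image"]).to_pandas()
print(df.shape)  # expected: (10000, number_of_annotation_columns)
```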

## Features

### Core Identifiers

- **uuid**: Unique identifier for each image
- **source**: Data source platform (Mapillary/KartaView)
- **image**: Street-view image
- **orig_id**: Original platform-specific identifier
- **lat/lon**: Geographic coordinates

### Environmental Context

- **quality**: Image quality assessment (good, slightly poor, poor)
- **weather**: Weather conditions (clear, cloudy, rainy, snowy)
- **lighting_condition**: Lighting context (day, night, dusk/dawn)
- **platform**: Surface type (driving surface, walking surface, cycling surface)

### Infrastructure Characteristics

- **highway**: Road classification (residential, primary, secondary, tertiary, etc.)
- **road_width**: Road width measurements in meters
- **lanes**: Number of traffic lanes
- **urban_term**: Urban density classification (urban centre, suburban, peri-urban)
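
As a usage sketch, the infrastructure fields can be combined to slice the data, for example selecting residential streets with at least two lanes. This assumes `lanes` and `road_width` parse as numbers and reuses the `df` table from the loading sketch above.

```python
import pandas as pd

# Residential streets with at least two marked lanes and a recorded width.
lanes = pd.to_numeric(df["lanes"], errors="coerce")
width = pd.to_numeric(df["road_width"], errors="coerce")
residential = df[(df["highway"] == "residential") & (lanes >= 2) & width.notna()]

print(len(residential), "samples; median road width:",
      width[residential.index].median(), "m")
```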

### Perception Scores

Human-rated perceptual qualities on a continuous scale (a short analysis sketch follows the list):

- **Safe**: Safety perception rating
- **Lively**: Liveliness perception rating
- **Beautiful**: Aesthetic appeal rating
- **Boring**: Monotony perception rating
- **Depressing**: Negative affect rating
- **Wealthy**: Socioeconomic perception rating
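
The sketch below relates the ratings to one another; the column names (`Safe`, `Lively`, ...) are those listed above, and `df` is the table from the loading sketch.

```python
perception_cols = ["Safe", "Lively", "Beautiful", "Boring", "Depressing", "Wealthy"]

# Pairwise correlations between the perception ratings.
print(df[perception_cols].corr().round(2))

# The ten scenes perceived as safest, with other ratings for comparison.
print(df.nlargest(10, "Safe")[["uuid", "Safe", "Beautiful", "Lively"]])
```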

### Visual Composition

Pixel-level semantic segmentation percentages and computed indices (a computation sketch follows the list):

- **Road**: Road surface coverage percentage
- **Building**: Built structure coverage percentage
- **Vegetation**: Natural vegetation coverage percentage
- **green_view_index**: Quantitative measure of visible greenery (0-1)
- **sky_view_index**: Quantitative measure of visible sky (0-1)
- **Car**: Vehicle presence percentage
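
As an illustration of how indices of this kind are typically derived (not necessarily the exact pipeline used here), a green or sky view index can be computed as the fraction of pixels assigned to the corresponding class in a semantic segmentation mask. The class IDs below follow the Cityscapes train-ID convention and are an assumption.

```python
import numpy as np

def view_index(seg_mask: np.ndarray, class_ids: list[int]) -> float:
    """Fraction of pixels (0-1) whose label is in class_ids.

    seg_mask is an H x W array of integer class labels produced by a
    semantic segmentation model; the label map is model-specific.
    """
    return float(np.isin(seg_mask, class_ids).mean())

# Hypothetical mask with Cityscapes-style train IDs (8 = vegetation, 10 = sky).
mask = np.random.randint(0, 19, size=(512, 1024))
print("green_view_index:", round(view_index(mask, [8]), 3))
print("sky_view_index:", round(view_index(mask, [10]), 3))
```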

### Data Quality

- **Feature completeness**: 95%+ coverage for visual indices and perception scores
- **Geographic diversity**: Global representation across 170+ countries
- **Infrastructure coverage**: 85%+ coverage for road classification and lane data
- **Road width data**: 60%+ coverage with precise measurements
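
The coverage figures above can be re-checked for any subset by measuring the non-null fraction of each column; a quick sketch using the `df` table from the loading example:

```python
# Non-null coverage per column, as a percentage of the 10,000 samples.
coverage = df.notna().mean().mul(100).sort_values(ascending=False)

for column in ["green_view_index", "sky_view_index", "Safe",
               "highway", "lanes", "road_width"]:
    print(f"{column:>18}: {coverage[column]:5.1f}% complete")
```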

## Applications

The dataset supports training and evaluation of models for:

- **Multi-modal learning**: Train models to understand relationships between visual features and human perception of streetscapes
- **Urban scene understanding**: Develop AI systems for automated urban environment classification
- **Cross-cultural generalization**: Build models that work across diverse global urban contexts
- **Semantic segmentation**: Training data for urban scene parsing and object detection
- **Quantitative urban assessment**: Measure infrastructure characteristics, visual appeal, and perceived safety
- **Urban planning and policy research**: Support data-driven analysis of the built environment

## Sampling Methodology

The dataset employs a three-tier sampling strategy (a conceptual code sketch follows the list):

- **Tier 1: High completeness samples (50%)**: Prioritizes samples with comprehensive feature coverage
- **Tier 2: Visual diversity samples (30%)**: Ensures representation across different urban environment types
- **Tier 3: Geographic coverage (20%)**: Maintains global representativeness
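
The sketch below shows how such a 50/30/20 draw could be implemented over the full source table. The completeness criterion and tier proportions come from the description above; the stratification keys are illustrative (`urban_term` is listed in the feature table, while `country` is an assumed column), so this is not the authors' documented pipeline.

```python
import pandas as pd

def three_tier_sample(full: pd.DataFrame, n: int = 10_000, seed: int = 42) -> pd.DataFrame:
    """Illustrative 50/30/20 tiered draw -- a sketch, not the published pipeline."""
    # Tier 1 (50%): rows with the most complete feature coverage.
    completeness = full.notna().mean(axis=1)
    tier1 = full.loc[completeness.nlargest(n // 2).index]
    rest = full.drop(tier1.index)

    # Tier 2 (30%): stratify by urban environment type (urban_term, listed above).
    per_type = max((3 * n // 10) // max(rest["urban_term"].nunique(), 1), 1)
    tier2 = rest.groupby("urban_term", group_keys=False).apply(
        lambda g: g.sample(min(len(g), per_type), random_state=seed)
    )
    rest = rest.drop(tier2.index)

    # Tier 3 (20%): stratify by a geographic key for global coverage
    # ("country" is an assumed column, not part of the 25 listed features).
    per_place = max((n // 5) // max(rest["country"].nunique(), 1), 1)
    tier3 = rest.groupby("country", group_keys=False).apply(
        lambda g: g.sample(min(len(g), per_place), random_state=seed)
    )

    return pd.concat([tier1, tier2, tier3]).head(n)
```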

## References

This dataset is derived from the NUS Global Streetscapes repository, developed by the Urban Analytics Lab at the National University of Singapore. For additional information and resources, refer to the following:

- **Hugging Face Dataset**: [NUS-UAL/global-streetscapes](https://huggingface.co/datasets/NUS-UAL/global-streetscapes)
- **GitHub Repository**: [ualsg/global-streetscapes](https://github.com/ualsg/global-streetscapes)
- **Project Documentation**: [Urban Analytics Lab - Global Streetscapes](https://ual.sg/project/global-streetscapes/)

Files changed (1)

README.md CHANGED (+22 −1)

@@ -64,4 +64,25 @@ configs:
  data_files:
  - split: train
    path: data/train-*
- ---
+ license: mit
+ task_categories:
+ - image-classification
+ - visual-question-answering
+ language:
+ - en
+ tags:
+ - urban-perception
+ - urban-planning
+ - street-view
+ - computer-vision
+ - mapillary
+ - green-view-index
+ - sky-view-index
+ - street-view-assessment
+ - infrastructure-assessment
+ pretty_name: >-
+   Urban Perception Dataset: Street-View Image Analysis Dataset for
+   Infrastructure Assessment
+ size_categories:
+ - 1K<n<10K
+ ---