Complete JFrog Artifactory to CloudRepo Migration Guide: Save 90%+ on Repository Costs
Step-by-step guide to migrate from JFrog Artifactory to CloudRepo. Escape complex licensing, eliminate egress fees, and reduce costs by 90%+ with our comprehensive migration playbook.
Migrating from JFrog Artifactory to CloudRepo represents one of the most impactful cost optimizations your organization can make. Teams regularly save $138,000+ annually while escaping JFrog’s complex licensing model and surprise egress fees. This comprehensive guide provides everything you need for a successful migration.
Why Teams Are Escaping JFrog Artifactory
The migration from JFrog Artifactory to CloudRepo isn’t just about cost savings—it’s about escaping a pricing model that punishes growth and creates budget uncertainty.
The Hidden Costs of JFrog Artifactory
JFrog’s pricing model includes multiple cost vectors that compound quickly:
- $15,000+ minimum annual commitment before you store a single artifact
- Egress fees that can double your bill - charged per GB downloaded
- User-based pricing that increases with team growth
- Storage overage charges when you exceed limits
- Mandatory support contracts at 20% of license cost
- Infrastructure costs for self-hosted deployments
Real Customer Example: A 50-person engineering team paid JFrog $156,000 annually:
- Base license: $75,000
- Egress fees: $48,000
- Storage overages: $18,000
- Support contract: $15,000
After migrating to CloudRepo, their annual cost dropped to $17,988 - an annual savings of $138,012 (88.5% reduction).
CloudRepo’s Transparent Pricing Model
CloudRepo eliminates pricing complexity with a simple, predictable model:
- Storage-based pricing only - no per-user fees
- No egress fees (with reasonable soft limits per plan)
- No minimum commitments - pay for what you use
- Support included in all plans
- Transparent calculator - know your costs upfront
Cost Analysis: Calculate Your Savings
Let’s break down the typical savings when migrating from JFrog Artifactory to CloudRepo:
JFrog Artifactory Annual Costs (50-person team, 5TB storage, 50TB monthly egress)
Component | Annual Cost |
---|---|
Pro X License (50 users) | $75,000 |
Egress fees (50TB/month × $0.08/GB) | $48,000 |
Storage beyond included | $18,000 |
Support contract (20%) | $15,000 |
Total Annual Cost | $156,000 |
CloudRepo Annual Costs (same usage)
Component | Annual Cost |
---|---|
Business Plan (10TB storage) | $17,988 ($1,499/month) |
Additional storage | Included |
Egress fees | $0 (not metered; reasonable soft limits apply) |
Support | Included |
Total Annual Cost | $17,988 |
Annual Savings: $138,012 (88.5% reduction)
This represents:
- 11.5 additional engineer salaries
- 276 months of AWS infrastructure
- Funding for multiple development tools
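To run the same comparison with your own numbers, the back-of-the-envelope script below mirrors the two tables above. The rates it uses (the $0.08/GB egress charge, 20% support contract, and $1,499/month plan) are the illustrative example figures from this section, not a quote from either vendor, so substitute your actual contract terms.
# savings_estimate.py - rough annual cost comparison using the example rates above
def jfrog_annual_cost(base_license, egress_tb_per_month, storage_overage, support_rate=0.20):
    """Example JFrog cost model: license + metered egress + storage overage + support."""
    egress = egress_tb_per_month * 1000 * 0.08 * 12  # $0.08/GB, 12 months
    return base_license + egress + storage_overage + support_rate * base_license
def cloudrepo_annual_cost(monthly_plan=1499):
    """Example CloudRepo cost: flat monthly plan fee, no egress metering."""
    return monthly_plan * 12
jfrog = jfrog_annual_cost(base_license=75_000, egress_tb_per_month=50, storage_overage=18_000)
cloudrepo = cloudrepo_annual_cost()
print(f"JFrog:     ${jfrog:,.0f}/year")
print(f"CloudRepo: ${cloudrepo:,.0f}/year")
print(f"Savings:   ${jfrog - cloudrepo:,.0f} ({(jfrog - cloudrepo) / jfrog:.1%})")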
Pre-Migration Assessment
Before beginning your migration, conduct a thorough assessment of your Artifactory environment.
1. Audit Current Artifactory Usage
# Get repository statistics
curl -u admin:password \
"https://artifactory.example.com/artifactory/api/storageinfo" \
| jq '.repositoriesSummaryList[]'
# List all repositories
curl -u admin:password \
"https://artifactory.example.com/artifactory/api/repositories" \
| jq '.[].key'
# Check user count and permissions
curl -u admin:password \
"https://artifactory.example.com/artifactory/api/security/users" \
| jq '. | length'
# Analyze bandwidth usage (requires log analysis)
grep "Download" /var/log/artifactory/access.log | \
awk '{sum+=$10} END {print sum/1024/1024/1024 " GB"}'
2. Document Repository Structure
Create a comprehensive inventory of your repositories:
# repository-inventory.yaml
repositories:
maven:
- name: libs-release
type: local
package_type: maven
size: 850GB
artifacts: 125000
- name: libs-snapshot
type: local
package_type: maven
size: 420GB
artifacts: 89000
cleanup_policy: 30_days
- name: maven-central
type: remote
url: https://repo1.maven.org/maven2
cache_size: 200GB
docker:
  - name: docker-local
type: local
package_type: docker
size: 1.2TB
images: 3400
npm:
  - name: npm-local
type: local
package_type: npm
size: 180GB
packages: 45000
3. Export Critical Configurations
# Export repository configurations
for repo in $(curl -s -u admin:password \
"https://artifactory.example.com/artifactory/api/repositories" | \
jq -r '.[].key'); do
curl -u admin:password \
"https://artifactory.example.com/artifactory/api/repositories/$repo" \
> "configs/$repo.json"
done
# Export users and permissions
curl -u admin:password \
"https://artifactory.example.com/artifactory/api/security/users" \
> users.json
curl -u admin:password \
"https://artifactory.example.com/artifactory/api/security/permissions" \
> permissions.json
# Export virtual repository configurations
curl -u admin:password \
"https://artifactory.example.com/artifactory/api/repositories?type=virtual" \
> virtual-repos.json
4. Analyze Integration Points
Document all systems integrated with Artifactory:
- CI/CD Pipelines: Jenkins, GitHub Actions, GitLab CI
- Build Tools: Maven, Gradle, npm, Docker
- Deployment Systems: Kubernetes, Terraform, Ansible
- Monitoring: Datadog, New Relic, Prometheus
- Security Scanners: Snyk, SonarQube, Twistlock
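A practical way to make sure nothing on this list is missed is to scan your checked-out repositories for references to the Artifactory hostname. The sketch below does this in Python; the hostname, search root, and candidate file patterns are assumptions to adjust for your environment.
# find_artifactory_references.py - locate build/CI files that still reference Artifactory
from pathlib import Path
ARTIFACTORY_HOST = "artifactory.example.com"  # assumed hostname
SEARCH_ROOT = Path("~/src").expanduser()      # assumed root of your local checkouts
CANDIDATE_NAMES = {"Jenkinsfile", "pom.xml", "build.gradle", "settings.xml",
                   "settings.gradle", ".gitlab-ci.yml", "package.json", "Dockerfile"}
CANDIDATE_SUFFIXES = {".yml", ".yaml", ".tf", ".properties"}
hits = []
for path in SEARCH_ROOT.rglob("*"):
    if not path.is_file():
        continue
    if path.name not in CANDIDATE_NAMES and path.suffix not in CANDIDATE_SUFFIXES:
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    if ARTIFACTORY_HOST in text:
        hits.append(path)
print(f"{len(hits)} file(s) reference {ARTIFACTORY_HOST}:")
for p in sorted(hits):
    print(f"  {p}")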
Step-by-Step Migration Process
Step 1: Create CloudRepo Account and Initial Setup
- Sign up for CloudRepo at https://cloudrepo.io/signup
- Choose appropriate plan based on your storage needs
- Configure organization settings
# Set CloudRepo credentials as environment variables
export CLOUDREPO_USER="your-username"
export CLOUDREPO_PASSWORD="your-password"
export CLOUDREPO_URL="https://your-org.cloudrepo.io"
# Verify connection
curl -u $CLOUDREPO_USER:$CLOUDREPO_PASSWORD \
"$CLOUDREPO_URL/api/v1/account/info"
Step 2: Map Artifactory Repositories to CloudRepo
Create equivalent repositories in CloudRepo for each Artifactory repository:
Repository Type Mapping
Artifactory Type | CloudRepo Equivalent | Notes |
---|---|---|
Local Repository | Repository | Direct mapping |
Remote Repository | Proxy Repository | Caching proxy for external repos |
Virtual Repository | Repository Group | Aggregates multiple repositories |
Generic Repository | Raw Repository | For arbitrary file storage |
# create_cloudrepo_repos.py
import requests
import json
import yaml
# Read Artifactory repository configurations
with open('repository-inventory.yaml', 'r') as f:
inventory = yaml.safe_load(f)
cloudrepo_api = "https://your-org.cloudrepo.io/api/v1"
auth = ("username", "password")
for repo_type, repos in inventory['repositories'].items():
for repo in repos: # Create CloudRepo repository
payload = {
"name": repo['name'],
"type": repo_type,
"description": f"Migrated from Artifactory {repo['name']}"
}
response = requests.post(
f"{cloudrepo_api}/repositories",
json=payload,
auth=auth
)
if response.status_code == 201:
print(f"Created repository: {repo['name']}")
else:
print(f"Failed to create {repo['name']}: {response.text}")
Step 3: Export Artifacts from Artifactory
Use JFrog CLI for efficient bulk export:
#!/bin/bash
# export_from_artifactory.sh
# Install JFrog CLI if not present
if ! command -v jf &> /dev/null; then
curl -fL https://getcli.jfrog.io/v2-jf | sh
chmod +x jf
sudo mv jf /usr/local/bin/
fi
# Configure JFrog CLI
jf config add artifactory-server \
--artifactory-url="https://artifactory.example.com/artifactory" \
--user="admin" \
--password="password"
# Export repositories
REPOS=(
"libs-release"
"libs-snapshot"
"docker-local"
"npm-local"
)
for repo in "${REPOS[@]}"; do
echo "Exporting $repo..."
# Create export directory
mkdir -p exports/$repo
# Download all artifacts from repository
jf rt download \
"$repo/*" \
"exports/$repo/" \
--flat=false \
--threads=10 \
--retry=3
# Generate manifest
find exports/$repo -type f | \
sed "s|exports/$repo/||" > exports/$repo.manifest
echo "Exported $(wc -l < exports/$repo.manifest) artifacts from $repo"
done
Step 4: Import Artifacts to CloudRepo
Efficiently upload artifacts to CloudRepo:
#!/usr/bin/env python3
# import_to_cloudrepo.py
import os
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path
import hashlib
CLOUDREPO_URL = "https://your-org.cloudrepo.io"
AUTH = ("username", "password")
MAX_WORKERS = 10
def calculate_checksum(file_path):
"""Calculate SHA256 checksum of file."""
sha256_hash = hashlib.sha256()
with open(file_path, "rb") as f:
for byte_block in iter(lambda: f.read(4096), b""):
sha256_hash.update(byte_block)
return sha256_hash.hexdigest()
def upload_artifact(local_path, repo_name, artifact_path):
"""Upload single artifact to CloudRepo."""
url = f"{CLOUDREPO_URL}/repository/{repo_name}/{artifact_path}"
# Calculate checksum
checksum = calculate_checksum(local_path)
# Upload file
with open(local_path, 'rb') as f:
headers = {
'X-Checksum-SHA256': checksum
}
response = requests.put(
url,
data=f,
auth=AUTH,
headers=headers
)
if response.status_code == 201:
return f"✓ {artifact_path}"
else:
return f"✗ {artifact_path}: {response.status_code}"
def migrate_repository(repo_name, export_dir):
"""Migrate entire repository to CloudRepo."""
export_path = Path(export_dir)
artifacts = list(export_path.rglob('*'))
artifacts = [a for a in artifacts if a.is_file()]
print(f"Migrating {len(artifacts)} artifacts from {repo_name}")
results = []
with ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
futures = {}
for artifact_file in artifacts:
relative_path = artifact_file.relative_to(export_path)
future = executor.submit(
upload_artifact,
str(artifact_file),
repo_name,
str(relative_path)
)
futures[future] = str(relative_path)
for future in as_completed(futures):
result = future.result()
print(result)
results.append(result)
successful = len([r for r in results if r.startswith('✓')])
print(f"Migration complete: {successful}/{len(artifacts)} artifacts uploaded")
# Migrate all repositories
repositories = [
("libs-release", "exports/libs-release"),
("libs-snapshot", "exports/libs-snapshot"),
("docker-local", "exports/docker-local"),
("npm-local", "exports/npm-local"),
]
for repo_name, export_dir in repositories:
migrate_repository(repo_name, export_dir)
Step 5: Update Build Configurations
Maven Configuration
Update settings.xml to point to CloudRepo:
<!-- ~/.m2/settings.xml -->
<settings>
<servers>
<server>
<id>cloudrepo-releases</id>
<username>${env.CLOUDREPO_USER}</username>
<password>${env.CLOUDREPO_PASSWORD}</password>
</server>
<server>
<id>cloudrepo-snapshots</id>
<username>${env.CLOUDREPO_USER}</username>
<password>${env.CLOUDREPO_PASSWORD}</password>
</server>
</servers>
<mirrors>
<mirror>
<id>cloudrepo-central</id>
<mirrorOf>central</mirrorOf>
<url>https://your-org.cloudrepo.io/repository/maven-central/</url>
</mirror>
</mirrors>
<profiles>
<profile>
<id>cloudrepo</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<repositories>
<repository>
<id>cloudrepo-releases</id>
<url>https://your-org.cloudrepo.io/repository/libs-release/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>cloudrepo-snapshots</id>
<url>https://your-org.cloudrepo.io/repository/libs-snapshot/</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
</profile>
</profiles>
</settings>
Gradle Configuration
Update gradle.properties and build scripts:
# gradle.properties
cloudrepoUrl=https://your-org.cloudrepo.io
cloudrepoUsername=your-username
cloudrepoPassword=your-password
// build.gradle
repositories {
maven {
url "${cloudrepoUrl}/repository/libs-release/"
credentials {
username cloudrepoUsername
password cloudrepoPassword
}
}
maven {
url "${cloudrepoUrl}/repository/libs-snapshot/"
credentials {
username cloudrepoUsername
password cloudrepoPassword
}
}
}
publishing {
repositories {
maven {
name = 'cloudrepo'
url = uri("${cloudrepoUrl}/repository/libs-${version.endsWith('SNAPSHOT') ? 'snapshot' : 'release'}/")
credentials {
username = cloudrepoUsername
password = cloudrepoPassword
}
}
}
}
Step 6: Migrate Virtual Repositories
Create Repository Groups in CloudRepo to replace Artifactory Virtual Repositories:
# migrate_virtual_repos.py
import json
import requests
# Load virtual repository configurations
with open('virtual-repos.json', 'r') as f:
virtual_repos = json.load(f)
cloudrepo_api = "https://your-org.cloudrepo.io/api/v1"
auth = ("username", "password")
for vrepo in virtual_repos: # Create repository group in CloudRepo
group_config = {
"name": vrepo['key'],
"type": "group",
"repositories": vrepo['repositories'],
"description": f"Migrated virtual repository from Artifactory"
}
response = requests.post(
f"{cloudrepo_api}/repository-groups",
json=group_config,
auth=auth
)
if response.status_code == 201:
print(f"Created repository group: {vrepo['key']}")
print(f" Members: {', '.join(vrepo['repositories'])}")
Step 7: Configure Authentication and Permissions
Migrate users and set up appropriate permissions:
# migrate_users.py
import json
import requests
import random
import string
def generate_password():
"""Generate secure random password."""
return ''.join(random.choices(
string.ascii_letters + string.digits + string.punctuation,
k=16
))
# Load Artifactory users
with open('users.json', 'r') as f:
artifactory_users = json.load(f)
cloudrepo_api = "https://your-org.cloudrepo.io/api/v1"
auth = ("admin", "admin_password")
user_mappings = []
for user in artifactory_users: # Skip system users
if user['name'] in ['anonymous', '_internal']:
continue
# Generate new password for CloudRepo
new_password = generate_password()
# Create user in CloudRepo
user_data = {
"username": user['name'],
"email": user['email'],
"password": new_password,
"roles": ["developer"] # Map Artifactory roles to CloudRepo roles
}
response = requests.post(
f"{cloudrepo_api}/users",
json=user_data,
auth=auth
)
if response.status_code == 201:
user_mappings.append({
"username": user['name'],
"email": user['email'],
"password": new_password
})
print(f"Created user: {user['name']}")
# Save user mappings for distribution
with open('cloudrepo_users.json', 'w') as f:
json.dump(user_mappings, f, indent=2)
print(f"\nMigrated {len(user_mappings)} users")
print("User credentials saved to cloudrepo_users.json")
Step 8: Update CI/CD Pipelines
Jenkins Migration
Update Jenkins to use CloudRepo:
// Jenkinsfile
pipeline {
agent any
environment {
CLOUDREPO_CREDS = credentials('cloudrepo-credentials')
CLOUDREPO_URL = 'https://your-org.cloudrepo.io'
}
stages {
stage('Build') {
steps {
sh '''
# Configure Maven settings
echo "<settings>
<servers>
<server>
<id>cloudrepo</id>
<username>${CLOUDREPO_CREDS_USR}</username>
<password>${CLOUDREPO_CREDS_PSW}</password>
</server>
</servers>
</settings>" > settings.xml
# Build with CloudRepo repositories
mvn -s settings.xml clean package
'''
}
}
stage('Deploy') {
steps {
sh '''
mvn -s settings.xml deploy \
-DaltDeploymentRepository=cloudrepo::default::${CLOUDREPO_URL}/repository/libs-release/
'''
}
}
}
}
GitHub Actions Migration
# .github/workflows/build.yml
name: Build and Deploy
on:
push:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Java
uses: actions/setup-java@v3
with:
java-version: '11'
distribution: 'adopt'
- name: Configure Maven for CloudRepo
run: |
mkdir -p ~/.m2
cat > ~/.m2/settings.xml <<EOF
<settings>
<servers>
<server>
<id>cloudrepo</id>
<username>${{ secrets.CLOUDREPO_USERNAME }}</username>
<password>${{ secrets.CLOUDREPO_PASSWORD }}</password>
</server>
</servers>
</settings>
EOF
- name: Build and Deploy
run: |
mvn clean deploy \
-DaltDeploymentRepository=cloudrepo::default::https://your-org.cloudrepo.io/repository/libs-release/
env:
CLOUDREPO_USERNAME: ${{ secrets.CLOUDREPO_USERNAME }}
CLOUDREPO_PASSWORD: ${{ secrets.CLOUDREPO_PASSWORD }}
GitLab CI Migration
# .gitlab-ci.yml
variables:
CLOUDREPO_URL: "https://your-org.cloudrepo.io"
MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"
cache:
  paths:
    - .m2/repository/
before_script:
- |
mkdir -p ~/.m2
cat > ~/.m2/settings.xml <<EOF
<settings>
<servers>
<server>
<id>cloudrepo</id>
<username>${CLOUDREPO_USERNAME}</username>
<password>${CLOUDREPO_PASSWORD}</password>
</server>
</servers>
</settings>
EOF
build:
  stage: build
  script:
    - mvn clean compile

test:
  stage: test
  script:
    - mvn test

deploy:
  stage: deploy
  script:
    - |
      mvn deploy \
        -DaltDeploymentRepository=cloudrepo::default::${CLOUDREPO_URL}/repository/libs-release/
  only:
    - main
Step 9: Test and Validate
Comprehensive testing checklist:
#!/bin/bash
# validate_migration.sh
echo "CloudRepo Migration Validation"
echo "=============================="
# Test authentication
echo -n "Testing authentication... "
if curl -s -u $CLOUDREPO_USER:$CLOUDREPO_PASSWORD \
"$CLOUDREPO_URL/api/v1/account/info" > /dev/null; then
echo "✓"
else
echo "✗"
exit 1
fi
# Test artifact download
echo -n "Testing artifact download... "
if curl -s -u $CLOUDREPO_USER:$CLOUDREPO_PASSWORD \
"$CLOUDREPO_URL/repository/libs-release/com/example/app/1.0/app-1.0.jar" \
-o /tmp/test-artifact.jar; then
echo "✓"
else
echo "✗"
fi
# Test artifact upload
echo -n "Testing artifact upload... "
echo "test" > /tmp/test-file.txt
if curl -s -u $CLOUDREPO_USER:$CLOUDREPO_PASSWORD \
-X PUT \
-T /tmp/test-file.txt \
"$CLOUDREPO_URL/repository/libs-release/test/migration/test.txt"; then
echo "✓"
else
echo "✗"
fi
# Test build tool integration
echo -n "Testing Maven build... "
cd sample-project
if mvn clean compile > /dev/null 2>&1; then
echo "✓"
else
echo "✗"
fi
# Test Docker registry
echo -n "Testing Docker registry... "
if docker login your-org.cloudrepo.io \
-u $CLOUDREPO_USER \
-p $CLOUDREPO_PASSWORD > /dev/null 2>&1; then
echo "✓"
else
echo "✗"
fi
# Compare artifact counts
echo -e "\nArtifact Count Comparison:"
echo "-------------------------"
for repo in libs-release libs-snapshot npm-local docker-local; do
artifactory_count=$(wc -l < "exports/$repo.manifest" 2>/dev/null || echo 0)
cloudrepo_count=$(curl -s -u $CLOUDREPO_USER:$CLOUDREPO_PASSWORD \
"$CLOUDREPO_URL/api/v1/repositories/$repo/statistics" | \
jq '.artifactCount' 2>/dev/null || echo 0)
if [ "$artifactory_count" -eq "$cloudrepo_count" ]; then
status="✓"
else
status="✗"
fi
printf "%-20s Artifactory: %6d CloudRepo: %6d %s\n" \
"$repo" "$artifactory_count" "$cloudrepo_count" "$status"
done
echo -e "\nValidation complete!"
Step 10: Decommission Artifactory
Once validation is complete, safely decommission Artifactory:
#!/bin/bash
# decommission_artifactory.sh
# Create final backup
echo "Creating final Artifactory backup..."
curl -X POST -u admin:password \
"https://artifactory.example.com/artifactory/api/system/backup" \
-H "Content-Type: application/json" \
-d "{
  \"key\": \"final-backup-$(date +%Y%m%d)\",
  \"includeBuilds\": true,
  \"includeConfig\": true
}"
# Export audit logs
echo "Exporting audit logs..."
tar -czf artifactory-logs-$(date +%Y%m%d).tar.gz \
/var/log/artifactory/
# Update DNS/Load Balancer (if using CNAME)
echo "Update DNS to point to CloudRepo:"
echo " artifactory.example.com → your-org.cloudrepo.io"
# Notify teams
echo "Sending migration completion notification..."
curl -X POST https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
-H 'Content-Type: application/json' \
-d '{
"text": "Artifactory migration complete! New repository URL: https://your-org.cloudrepo.io",
"attachments": [{
"color": "good",
"fields": [
{"title": "Annual Savings", "value": "$138,212", "short": true},
{"title": "Cost Reduction", "value": "88.5%", "short": true}
]
}]
}'
# Schedule Artifactory shutdown
echo "Artifactory will be shut down in 30 days"
echo "Final backup location: /backups/artifactory/final-backup-$(date +%Y%m%d)"
Repository Type Migration Guide
Docker Registry Migration
Migrating Docker images requires special handling:
#!/bin/bash
# migrate_docker_images.sh
# Login to both registries
docker login artifactory.example.com
docker login your-org.cloudrepo.io
# Get list of all images
IMAGES=$(curl -s -u admin:password \
"https://artifactory.example.com/artifactory/api/docker/docker-local/v2/_catalog" | \
jq -r '.repositories[]')
for image in $IMAGES; do
# Get all tags for image
TAGS=$(curl -s -u admin:password \
"https://artifactory.example.com/artifactory/api/docker/docker-local/v2/$image/tags/list" | \
jq -r '.tags[]')
for tag in $TAGS; do
echo "Migrating $image:$tag"
# Pull from Artifactory
docker pull artifactory.example.com/docker-local/$image:$tag
# Tag for CloudRepo
docker tag \
artifactory.example.com/docker-local/$image:$tag \
your-org.cloudrepo.io/$image:$tag
# Push to CloudRepo
docker push your-org.cloudrepo.io/$image:$tag
# Clean up local images
docker rmi artifactory.example.com/docker-local/$image:$tag
docker rmi your-org.cloudrepo.io/$image:$tag
done
done
NPM/PyPI Repository Migration
#!/bin/bash
# migrate_npm_packages.sh
# Configure npm for CloudRepo
npm config set registry https://your-org.cloudrepo.io/repository/npm-group/
npm config set always-auth true
npm config set _auth $(echo -n "$CLOUDREPO_USER:$CLOUDREPO_PASSWORD" | base64)
# Export NPM packages from Artifactory
cd exports/npm-local
for package in $(find . -name "*.tgz"); do
echo "Publishing $package to CloudRepo"
npm publish $package --registry https://your-org.cloudrepo.io/repository/npm-local/
done
# Python/PyPI migration
pip config set global.index-url https://$CLOUDREPO_USER:$CLOUDREPO_PASSWORD@your-org.cloudrepo.io/repository/pypi-group/simple
# Upload Python packages
for package in $(find exports/pypi-local -name "*.whl" -o -name "*.tar.gz"); do
twine upload \
--repository-url https://your-org.cloudrepo.io/repository/pypi-local/ \
-u $CLOUDREPO_USER \
-p $CLOUDREPO_PASSWORD \
$package
done
Feature Comparison
Understanding what changes between Artifactory and CloudRepo:
What You’ll Gain
Feature | CloudRepo Advantage |
---|---|
Cost | 90%+ reduction in annual costs |
Pricing Model | Simple, transparent, no hidden fees |
Egress Fees | None (reasonable soft limits) |
Setup Time | 5 minutes vs days/weeks |
Maintenance | Zero - fully managed |
Support | Included in all plans |
Performance | Global CDN vs single server |
Availability | 99.9% SLA guaranteed |
Security | SOC2 compliant, encrypted at rest |
Feature Parity
Feature | Artifactory | CloudRepo | Notes |
---|---|---|---|
Maven/Gradle | ✓ | ✓ | Full support |
NPM | ✓ | ✓ | Full support |
Docker | ✓ | ✓ | Full support |
PyPI | ✓ | ✓ | Full support |
NuGet | ✓ | ✓ | Full support |
REST API | ✓ | ✓ | Different endpoints |
Virtual Repos | ✓ | ✓ | Called Repository Groups |
Access Tokens | ✓ | ✓ | Full support |
LDAP/SAML | ✓ | ✓ | Enterprise plan |
Webhooks | ✓ | ✓ | Full support |
Replication | ✓ | Limited | Geo-replication coming |
Xray Integration | ✓ | ✗ | Use alternative scanners |
Workarounds for Missing Features
JFrog Xray Alternative: Integrate with Snyk, Sonatype Nexus IQ, or GitHub Security Scanning
# Example: Snyk integration in CI/CD (GitHub Actions step)
- name: Security Scan
  run: |
    snyk test --all-projects
    snyk monitor --all-projects
Advanced Replication: Use CloudRepo’s API for custom replication:
# custom_replication.py
import requests
import schedule
import time
def replicate_artifacts():
"""Replicate artifacts between CloudRepo instances."""
source_api = "https://primary.cloudrepo.io/api/v1"
target_api = "https://backup.cloudrepo.io/api/v1"
# Get list of recent artifacts
response = requests.get(
f"{source_api}/repositories/libs-release/artifacts",
params={"since": "24h"},
auth=("user", "pass")
)
for artifact in response.json():
# Download from source
artifact_data = requests.get(
artifact['downloadUrl'],
auth=("user", "pass")
)
# Upload to target
requests.put(
f"{target_api}/repositories/libs-release/{artifact['path']}",
data=artifact_data.content,
auth=("user", "pass")
)
# Schedule replication every hour
schedule.every().hour.do(replicate_artifacts)
while True:
schedule.run_pending()
time.sleep(60)
Common Migration Challenges and Solutions
Challenge 1: Large Docker Images
Problem: Docker images over 5GB failing to migrate
Solution: Use chunked upload with retry logic:
#!/bin/bash
# migrate_large_docker_images.sh
migrate_large_image() {
local image=$1
local max_retries=5
local retry_count=0
while [ $retry_count -lt $max_retries ]; do
echo "Attempt $((retry_count + 1)) for $image"
if timeout 3600 docker push your-org.cloudrepo.io/$image; then
echo "Successfully pushed $image"
return 0
fi
retry_count=$((retry_count + 1))
echo "Retry $retry_count for $image"
sleep 30
done
echo "Failed to push $image after $max_retries attempts"
return 1
}
# Process large images
for image in $(docker images --format "{{.Repository}}:{{.Tag}} {{.Size}}" | awk '$2 ~ /GB/ {print $1}'); do
migrate_large_image $image
done
Challenge 2: Maven Snapshots with Timestamps
Problem: Artifactory snapshot timestamps differ from CloudRepo format
Solution: Update Maven configuration to handle both formats:
<!-- pom.xml -->
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-deploy-plugin</artifactId>
<version>3.0.0</version>
<configuration>
<uniqueVersion>false</uniqueVersion>
</configuration>
</plugin>
</plugins>
</build>
Challenge 3: Permission Model Differences
Problem: Artifactory’s fine-grained permissions don’t map 1:1 to CloudRepo
Solution: Implement role-based access control mapping:
# map_permissions.py
permission_mapping = {
"artifactory-admin": "cloudrepo-admin",
"artifactory-deployer": "cloudrepo-developer",
"artifactory-reader": "cloudrepo-reader",
"artifactory-anonymous": None # No anonymous access in CloudRepo
}
def map_user_permissions(artifactory_user):
"""Map Artifactory permissions to CloudRepo roles."""
cloudrepo_roles = []
for perm in artifactory_user['permissions']:
if perm['repository'] == '*':
cloudrepo_roles.append('admin')
elif 'write' in perm['actions']:
cloudrepo_roles.append('developer')
elif 'read' in perm['actions']:
cloudrepo_roles.append('reader')
return list(set(cloudrepo_roles)) # Remove duplicates
Challenge 4: Build Cache Dependencies
Problem: Builds fail due to missing cached dependencies
Solution: Pre-populate CloudRepo cache:
#!/bin/bash
# populate_cache.sh
# Force download of all dependencies
mvn dependency:go-offline
# For Gradle
gradle build --refresh-dependencies
# For NPM
npm install --force
# For Python
pip download -r requirements.txt -d /tmp/pip-cache
for package in /tmp/pip-cache/*.whl; do
twine upload $package
done
Success Stories
Case Study 1: FinTech Startup Saves $142,000 Annually
“We were spending $158,000/year with JFrog for 60 developers. After migrating to CloudRepo, our annual cost dropped to $15,988. The migration took one weekend, and we haven’t looked back.” - DevOps Lead, Series B FinTech
Migration Stats:
- Migration Duration: 3 days
- Artifacts Migrated: 2.8 million
- Annual Savings: $142,012 (89.9%)
- Performance Improvement: 40% faster downloads
Case Study 2: E-commerce Platform Escapes Egress Trap
“JFrog’s egress fees were killing us - $6,000/month just for our CI/CD pipeline downloading artifacts. CloudRepo eliminated this entirely.” - Platform Engineer, Top 100 E-commerce
Migration Stats:
- Previous Egress Costs: $72,000/year
- Current Egress Costs: $0
- Total Annual Savings: $195,000
- ROI: 2 weeks
Case Study 3: Gaming Studio Simplifies Infrastructure
“We had two full-time engineers managing Artifactory. Now with CloudRepo, we have zero infrastructure overhead and those engineers work on actual products.” - CTO, Mobile Gaming Studio
Migration Stats:
- Infrastructure Eliminated: 12 servers
- Engineers Freed Up: 2 FTEs
- Annual Savings: $230,000 (including salaries)
- Uptime Improvement: 99.3% → 99.9%
Post-Migration Optimization
Performance Tuning
Optimize CloudRepo for maximum performance:
# cloudrepo-optimization.yaml
optimizations:
caching:
- Enable aggressive caching for release artifacts
- Set 30-day cache for snapshots
- Use CDN endpoints for read-heavy repositories
networking:
  - Configure region-specific endpoints
  - Enable HTTP/2 for better multiplexing
  - Use connection pooling in CI/CD
artifacts:
  - Enable compression for text artifacts
  - Use shallow cloning for large repositories
  - Implement artifact cleanup policies
Cost Monitoring
Track and optimize CloudRepo costs:
# monitor_costs.py
import requests
from datetime import datetime, timedelta
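# Illustrative pricing stub: assumes the flat $1,499/month Business Plan from the
# cost tables above - replace with your plan's actual rate card.
def calculate_monthly_cost(storage_gb):
    """Estimate the monthly cost for the given storage footprint."""
    return 1499.0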
def analyze_usage():
"""Analyze CloudRepo usage and costs."""
api_url = "https://your-org.cloudrepo.io/api/v1"
auth = ("admin", "password")
# Get storage metrics
storage = requests.get(
f"{api_url}/account/storage",
auth=auth
).json()
# Get bandwidth metrics
bandwidth = requests.get(
f"{api_url}/account/bandwidth",
params={"days": 30},
auth=auth
).json()
# Calculate costs
storage_gb = storage['used'] / 1024 / 1024 / 1024
monthly_cost = calculate_monthly_cost(storage_gb)
print(f"Storage Used: {storage_gb:.2f} GB")
print(f"Monthly Bandwidth: {bandwidth['total']:.2f} GB")
print(f"Estimated Monthly Cost: ${monthly_cost:.2f}")
print(f"vs. JFrog Artifactory: ${13000:.2f}")
print(f"Monthly Savings: ${13000 - monthly_cost:.2f}")
analyze_usage()
Security Best Practices
Implement security measures in CloudRepo:
#!/bin/bash
# security_setup.sh
# Enable 2FA for all admin users
curl -X POST \
"$CLOUDREPO_URL/api/v1/security/2fa/enforce" \
-u admin:password
# Configure IP whitelisting
curl -X POST \
"$CLOUDREPO_URL/api/v1/security/ip-whitelist" \
-u admin:password \
-d '{
"ranges": [
"10.0.0.0/8",
"172.16.0.0/12",
"203.0.113.0/24"
]
}'
# Enable audit logging
curl -X POST \
"$CLOUDREPO_URL/api/v1/security/audit" \
-u admin:password \
-d '{"enabled": true, "retention_days": 90}'
# Set up vulnerability scanning webhook
curl -X POST \
"$CLOUDREPO_URL/api/v1/webhooks" \
-u admin:password \
-d '{
"url": "https://security-scanner.example.com/scan",
"events": ["artifact.uploaded"],
"active": true
}'
Risk Mitigation Strategies
Parallel Run Strategy
Run both systems in parallel during transition:
# parallel_deploy.py
import requests
import concurrent.futures
def deploy_to_both(artifact_path, artifact_data):
"""Deploy artifact to both Artifactory and CloudRepo."""
results = {}
with concurrent.futures.ThreadPoolExecutor() as executor:
# Deploy to Artifactory
artifactory_future = executor.submit(
deploy_to_artifactory,
artifact_path,
artifact_data
)
# Deploy to CloudRepo
cloudrepo_future = executor.submit(
deploy_to_cloudrepo,
artifact_path,
artifact_data
)
results['artifactory'] = artifactory_future.result()
results['cloudrepo'] = cloudrepo_future.result()
return results
def deploy_to_artifactory(path, data):
return requests.put(
f"https://artifactory.example.com/artifactory/libs-release/{path}",
data=data,
auth=("user", "pass")
)
def deploy_to_cloudrepo(path, data):
return requests.put(
f"https://your-org.cloudrepo.io/repository/libs-release/{path}",
data=data,
auth=("user", "pass")
)
Rollback Procedures
Prepare for potential rollback:
#!/bin/bash
# rollback_plan.sh
# Lock down Artifactory (disable anonymous access) while keeping it available as a fallback
curl -X POST -u admin:password \
"https://artifactory.example.com/artifactory/api/system/configuration" \
-H "Content-Type: application/xml" \
-d '<config><security><anonAccessEnabled>false</anonAccessEnabled></security></config>'
# Sync recent changes back to Artifactory if needed
sync_to_artifactory() {
# Get artifacts created in last 24 hours
recent_artifacts=$(curl -s \
"$CLOUDREPO_URL/api/v1/artifacts?since=24h" \
-u $CLOUDREPO_USER:$CLOUDREPO_PASSWORD)
# Sync back to Artifactory
for artifact in $recent_artifacts; do
curl -X PUT \
"https://artifactory.example.com/artifactory/$artifact" \
-u admin:password \
-T "$artifact"
done
}
# Quick rollback: revert the DNS cutover
rollback_dns() {
  # If artifactory.example.com was re-pointed (CNAME) to your-org.cloudrepo.io at cutover,
  # revert that record with your DNS provider so clients resolve the Artifactory host again.
  echo "Revert DNS: point artifactory.example.com back at the original Artifactory host"
}
Data Validation
Ensure data integrity throughout migration:
# validate_migration.py
import hashlib
import json
import requests
from pathlib import Path
def validate_artifacts(manifest_file):
"""Validate all artifacts were migrated correctly."""
validation_results = []
with open(manifest_file, 'r') as f:
artifacts = f.readlines()
for artifact_path in artifacts:
artifact_path = artifact_path.strip()
# Get checksum from Artifactory
artifactory_checksum = get_artifactory_checksum(artifact_path)
# Get checksum from CloudRepo
cloudrepo_checksum = get_cloudrepo_checksum(artifact_path)
# Compare
if artifactory_checksum == cloudrepo_checksum:
status = "✓"
else:
status = "✗"
validation_results.append({
"path": artifact_path,
"status": status,
"artifactory_sha": artifactory_checksum,
"cloudrepo_sha": cloudrepo_checksum
})
# Generate report
success_count = sum(1 for r in validation_results if r['status'] == "✓")
total_count = len(validation_results)
print(f"Validation Results: {success_count}/{total_count} artifacts match")
# Save detailed report
with open('validation_report.json', 'w') as f:
json.dump(validation_results, f, indent=2)
return success_count == total_count
def get_artifactory_checksum(artifact_path):
response = requests.head(
f"https://artifactory.example.com/artifactory/libs-release/{artifact_path}",
auth=("user", "pass")
)
return response.headers.get('X-Checksum-SHA256', '')
def get_cloudrepo_checksum(artifact_path):
response = requests.head(
f"https://your-org.cloudrepo.io/repository/libs-release/{artifact_path}",
auth=("user", "pass")
)
return response.headers.get('X-Checksum-SHA256', '')
# Run validation
if validate_artifacts('migration_manifest.txt'):
print("✓ Migration validation successful!")
else:
print("✗ Validation failed - check validation_report.json")
Conclusion
Migrating from JFrog Artifactory to CloudRepo is one of the most impactful decisions you can make for your organization’s bottom line and developer productivity. With potential savings of $138,000+ annually and a dramatically simplified operational model, the ROI is immediate and substantial.
Key Takeaways:
- 90%+ cost reduction is typical, not exceptional
- Migration can be completed in days, not weeks
- Zero infrastructure overhead means your team focuses on shipping code
- No egress fees eliminates budget uncertainty
- Support included in all plans - no expensive contracts
The migration process, while requiring careful planning, is straightforward and well-supported. CloudRepo’s team is available to assist with migration planning and execution, ensuring a smooth transition.
Next Steps
- Calculate Your Savings: Use our pricing calculator to see your exact savings
- Start Free Trial: Experience CloudRepo with a 30-day free trial
- Get Migration Support: Contact our team for migration assistance
- Download Migration Toolkit: Access scripts and tools from our GitHub repository
Ready to escape JFrog’s complex pricing and reduce your repository costs by 90%? Start your free trial today and join thousands of teams who’ve made the switch to CloudRepo.
Have questions about migrating from JFrog Artifactory? Our migration specialists are standing by to help. Contact us for personalized migration support.
Ready to save 90% on your repository hosting?
Join thousands of teams who've switched to CloudRepo for better pricing and features.