Persist the total completed downloads of a torrent #559

Open
Tracked by #671
josecelano opened this issue Apr 11, 2024 · 2 comments
Labels: - User - (Enjoyable to Use our Software) · Easy (Good for Newcomers) · Enhancement / Feature Request (Something New) · good first issue (Good for newcomers)
Milestone: v3.1.0

Comments

josecelano (Member) commented Apr 11, 2024

Relates to: torrust/torrust-index-gui#521

We import this info from the tracker:

pub struct TorrentBasicInfo {
    pub info_hash: String,
    pub seeders: i64,
    pub completed: i64,
    pub leechers: i64,
}

But we only store seeders and leechers in the database:

CREATE TABLE "torrust_torrent_tracker_stats" (
	"torrent_id"	INTEGER NOT NULL,
	"tracker_url"	VARCHAR(256) NOT NULL,
	"seeders"	INTEGER NOT NULL DEFAULT 0,
	"leechers"	INTEGER NOT NULL DEFAULT 0,
	"updated_at"	TEXT DEFAULT 1000-01-01 00:00:00,
	FOREIGN KEY("torrent_id") REFERENCES "torrust_torrents"("torrent_id") ON DELETE CASCADE,
	UNIQUE("torrent_id","tracker_url"),
	PRIMARY KEY("torrent_id")
);

We could also persist the completed field to show that info on the frontend.
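
A minimal sketch of what the migration could look like, assuming we simply add a completed column with a zero default (the file name and column name below are only suggestions, not final):

-- Hypothetical migration, e.g. migrations/sqlite3/<datetime>_torrust_add_completed_to_tracker_stats.sql
-- Adds a "completed" counter next to "seeders" and "leechers"; existing rows default to 0.
ALTER TABLE "torrust_torrent_tracker_stats" ADD COLUMN "completed" INTEGER NOT NULL DEFAULT 0;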

This is the function that imports and stores that data.

    /// Import torrent statistics not updated recently.
    ///
    /// # Errors
    ///
    /// Will return an error if the database query failed.
    pub async fn import_torrents_statistics_not_updated_since(
        &self,
        datetime: DateTime<Utc>,
        limit: i64,
    ) -> Result<(), database::Error> {
        debug!(target: LOG_TARGET, "Importing torrents statistics not updated since {} limited to a maximum of {} torrents ...", datetime.to_string().yellow(), limit.to_string().yellow());

        let torrents = self
            .database
            .get_torrents_with_stats_not_updated_since(datetime, limit)
            .await?;

        if torrents.is_empty() {
            return Ok(());
        }

        info!(target: LOG_TARGET, "Importing {} torrents statistics from tracker {} ...", torrents.len().to_string().yellow(), self.tracker_url.yellow());

        // Import stats for all torrents in one request

        let info_hashes: Vec<String> = torrents.iter().map(|t| t.info_hash.clone()).collect();

        let torrent_info_vec = match self.tracker_service.get_torrents_info(&info_hashes).await {
            Ok(torrents_info) => torrents_info,
            Err(err) => {
                let message = format!("Error getting torrents tracker stats. Error: {err:?}");
                error!(target: LOG_TARGET, "{}", message);
                // todo: return a service error that can be a tracker API error or a database error.
                return Ok(());
            }
        };

        // Update stats for all torrents

        for torrent in torrents {
            match torrent_info_vec.iter().find(|t| t.info_hash == torrent.info_hash) {
                None => {
                    // No stats for this torrent in the tracker
                    drop(
                        self.database
                            .update_tracker_info(torrent.torrent_id, &self.tracker_url, 0, 0)
                            .await,
                    );
                }
                Some(torrent_info) => {
                    // Update torrent stats for this tracker
                    drop(
                        self.database
                            .update_tracker_info(
                                torrent.torrent_id,
                                &self.tracker_url,
                                torrent_info.seeders,
                                torrent_info.leechers,
                            )
                            .await,
                    );
                }
            }
        }

        Ok(())
    }
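
For the importer, the change could be as small as forwarding the value we already receive from the tracker. A rough sketch, assuming update_tracker_info is extended with an extra completed parameter (the exact signature change is still open for discussion):

// Sketch only: `update_tracker_info` would need to accept and persist the
// extra `completed` value; the `None` arm would keep passing 0, as it already
// does for seeders and leechers.
Some(torrent_info) => {
    drop(
        self.database
            .update_tracker_info(
                torrent.torrent_id,
                &self.tracker_url,
                torrent_info.seeders,
                torrent_info.leechers,
                torrent_info.completed,
            )
            .await,
    );
}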

Subtasks

  • Add the new field to the database and store the value in the statistics importer.
  • Add the new field to the API endpoints: torrent list and details (see the sketch below).
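
A hypothetical illustration of the second subtask; the actual response structs used by the list and details endpoints in the Index may be named and shaped differently:

// Hypothetical response model, for illustration only: the list and details
// endpoints would expose `completed` alongside `seeders` and `leechers`.
#[derive(serde::Serialize)]
pub struct TorrentStatsResponse {
    pub seeders: i64,
    pub leechers: i64,
    pub completed: i64, // new field imported from the tracker
}
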
josecelano added the Enhancement / Feature Request (Something New) and - User - (Enjoyable to Use our Software) labels Apr 11, 2024
josecelano added this to the v3.1.0 milestone Apr 11, 2024
josecelano added the Easy (Good for Newcomers) and good first issue (Good for newcomers) labels Apr 11, 2024
hungfnt commented May 11, 2024

Hi @josecelano. Do you generate the files in the migration folder, or do you manually edit them?

josecelano (Member, Author) commented:

> Hi @josecelano. Do you generate the files in the migration folder, or do you manually edit them?

Hi @ngthhu, you can do it manually if you want. What I usually do is:

  • Run sqlx-cli to generate the new migration. For example: sqlx migrate add torrust_add_field_xxx. We use the prefix torrust_.
  • That will generate the file in the migrations dir.
  • I copy that file twice, into migrations/mysql and migrations/sqlite3.

If you create the files manually, keep the datetime prefix.

After adding the migration, it will be executed automatically the next time you run the application.
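
For example (the migration name and timestamp prefix below are made up, just to show the layout):

# Generate the migration with the torrust_ prefix...
sqlx migrate add torrust_add_completed_to_tracker_stats
# ...then copy the generated file into both database-specific folders:
#   migrations/mysql/20240511000000_torrust_add_completed_to_tracker_stats.sql
#   migrations/sqlite3/20240511000000_torrust_add_completed_to_tracker_stats.sql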

Docs: https://docs.rs/torrust-index/3.0.0-alpha.2/torrust_index/#development

We are not using reversible migrations yet. I don't know why; the project was not using them when I started working on it. Maybe we can open a discussion to decide whether we should support them.
