Client Server Portal and Processing Server Communication and Functionality
-
What does this do? (Client Server)
- Pre-processes submissions to ensure only code-related files are included in what will be sent to the processing server (.cpp, .java, .c, .hpp, .h)
- Also removes any files related to downloaded packages
- Scrubs identifying data from submissions
- [name=Sammi] Scrubbed copies are then deleted after they are sent to the processing server (see Appendix B: Use Case 2)
- Ensures data is organized such that it can be identified after processing occurs (hash?)
- [name=Sammi] Key mapping sheet will be generated after scrubbing occurs
- Packages submissions into a compressed file (zip?)
- Sends submissions to processing server
- Parses results (i.e. matches student data with result data)
-
How does it do it?
-
Preprocessor:
- For each submission: [name=Lindsey: or for each file within the submission? Should we use a different hash for each file to further protect student data but link all relevant hashes to the same student? (i.e. 1:many)]
  - Create unique hash
  - Link hash with student info on local database
  - For each relevant file within submission:
    - Scrub identifying data and replace with hash
- Package all submissions into a compressed file
- Send to remote server (see the sketch below)
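A minimal sketch of the preprocessing flow in Python, assuming one hash per submission (the per-file question above is still open). All names and paths are illustrative; the scrub here only replaces the student id string, while a real implementation would also cover names, emails, and comments. The returned mapping is the "key mapping sheet" mentioned above.

```python
import hashlib
import re
import tarfile
from pathlib import Path

# Only code-related files are kept; everything else (including downloaded
# package contents) is skipped.
CODE_EXTENSIONS = {".cpp", ".java", ".c", ".hpp", ".h"}

def preprocess_submission(submission_dir: Path, student_id: str, out_dir: Path) -> str:
    """Copy one submission's code files into out_dir under a unique hash,
    scrubbing the student's identifying string on the way."""
    sub_hash = hashlib.sha256(f"{student_id}:{submission_dir.name}".encode()).hexdigest()[:16]
    dest = out_dir / sub_hash
    dest.mkdir(parents=True, exist_ok=True)
    for path in submission_dir.rglob("*"):
        if path.suffix not in CODE_EXTENSIONS:
            continue
        text = path.read_text(errors="ignore")
        scrubbed = re.sub(re.escape(student_id), sub_hash, text)  # simplified scrub
        (dest / path.name).write_text(scrubbed)
    return sub_hash

def preprocess_all(submissions: dict[str, Path], out_dir: Path, archive: Path) -> dict[str, str]:
    """Scrub every submission, package the scrubbed copies, and return the
    hash -> student key mapping kept on the local database."""
    key_map = {}
    for student_id, submission_dir in submissions.items():
        key_map[preprocess_submission(submission_dir, student_id, out_dir)] = student_id
    with tarfile.open(archive, "w:gz") as tar:  # .tar.gz, as listed further down this page
        tar.add(out_dir, arcname="current_submissions")
    return key_map
```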
-
Parsing Results: [name=Lindsey: Assuming this is done here]
- Results stored on Processing Server until requested
- Use the link file created earlier to match results with students
- Format the results and student info so they can easily be interpreted by the client (see the sketch below)
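A sketch of the matching step, assuming the results have already been parsed from the XML format at the bottom of this page into plain dicts; the field names mirror that XML, and `key_map` is the hash-to-student mapping produced during scrubbing.

```python
def match_results_to_students(matches: list[dict], key_map: dict[str, str]) -> list[dict]:
    """Swap submission hashes back to student info using the key mapping
    created during scrubbing, producing records the client portal can display."""
    resolved = []
    for match in matches:
        resolved.append({
            "match_number": match["number"],
            "students": [
                {
                    "student": key_map.get(s["hash"], "unknown"),
                    "file": s["file"],
                    "lines": (s["line_start"], s["line_finish"]),
                }
                for s in match["students"]
            ],
        })
    return resolved
```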
-
What does this do? (Processing Server)
- Accepts assignment sets from universities.
- Checks each submission for similarities with other submissions and additional materials (previous assignments)
- Stores results of similarities for a set period of time, or until they are requested by the submitter.
-
How does it do it?
-
Accepting Submissions
- There will be a queue that universities will upload their assignment set to.
- Upon assignment set upload, the professor that submitted the assignment set will receive a unique key that they can use to get the results later.
- [name=Jesse: I'm putting down unique key for now until we discuss it.]
- An assignment set is taken out of the queue when it can be processed, at which point the submissions within the set are checked for similarities (see the sketch below).
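A rough sketch of the queue and unique-key hand-off using only the Python standard library. The in-memory structures and the key format are placeholders; a real deployment would need persistent storage, and the key scheme is still undecided (see Jesse's note above).

```python
import queue
import uuid

job_queue = queue.Queue()   # assignment sets waiting to be processed
jobs = {}                   # unique key -> job metadata

def accept_assignment_set(archive_path: str, professor_email: str) -> str:
    """Queue an uploaded assignment set and return the unique key the
    professor uses later to request the results."""
    key = uuid.uuid4().hex
    jobs[key] = {"archive": archive_path, "email": professor_email, "status": "queued"}
    job_queue.put(key)
    return key

def take_next_job():
    """Worker side: pull the next assignment set off the queue for processing."""
    key = job_queue.get()
    jobs[key]["status"] = "processing"
    return key, jobs[key]
```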
-
Checking Submissions for similarities
- For each submission:
  - Do some sort of fancy schmancy algorithm to detect similarities (:+1: :balloon:)
  - If there is a match, put it in the results set
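Whatever the real algorithm ends up being, the control flow is roughly pairwise comparison over the set. In the sketch below, `difflib` is only a stand-in for the actual similarity algorithm, and the threshold is illustrative.

```python
from difflib import SequenceMatcher
from itertools import combinations

SIMILARITY_THRESHOLD = 0.8   # illustrative cutoff, not a decided value

def check_submissions(submissions: dict[str, str]) -> list[dict]:
    """Compare every pair of submissions (keyed by hash) and collect matches.
    SequenceMatcher stands in for the real similarity-detection algorithm."""
    results = []
    for (hash_a, code_a), (hash_b, code_b) in combinations(submissions.items(), 2):
        ratio = SequenceMatcher(None, code_a, code_b).ratio()
        if ratio >= SIMILARITY_THRESHOLD:
            results.append({"students": [hash_a, hash_b], "similarity": ratio})
    return results
```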
-
Storing Results
- After an assignment set is processed, the results for the similarities are stored
- The professor that submitted the assignment set can request the results using the unique key they were given.
- If the professor does not request the results within a given time period, the results expire and are deleted (as per Bockus's requirement of not storing files for a long period of time); see the sketch below
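A minimal in-memory sketch of result storage with an expiry window; the actual retention period and storage backend are still open questions.

```python
import time

RESULT_TTL_SECONDS = 7 * 24 * 60 * 60   # e.g. one week; actual period TBD

stored_results = {}   # unique key -> (stored_at, results)

def store_results(key: str, results) -> None:
    stored_results[key] = (time.time(), results)

def request_results(key: str):
    """Return results for the professor's unique key, or None if the key is
    unknown or the results have expired (and been deleted)."""
    entry = stored_results.get(key)
    if entry is None:
        return None
    stored_at, results = entry
    if time.time() - stored_at > RESULT_TTL_SECONDS:
        del stored_results[key]   # delete per the retention requirement
        return None
    return results
```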
-
Provides endpoints for the Client Portal to call
- This corresponds to what the Client Portal can do
-
What does this do? (Client Portal)
- Acts as a front end interface for the professor to manage files/classes/results after running the comparisons
- Offers ease of access for the user
- Secure access to archives/files/results as the portal is only accessed internally
- This is also where the user authenticates themselves/enters credentials in order to access content
-
How does it do it?
- Implementing a tested GUI experience, surveyed by colleagues?
- Will be accessed/hosted internally on the university's network.
- Username/Password database / 2FA?
-
Client Server ---> Processing Server
- Client Server uploads assignment set to Processing Server
  - Need to talk about the format of this (i.e. Zip file? File Structure? etc...)
  - Also, talk about metadata included
    - Professor, Institution, email for notifications
  - Response from the Processing Server is an ETA for the results
- Client Server requests results from Processing Server (see the sketch below)
  - Either succeeds and results are returned
  - Or fails and no results are returned (if no results are available yet)
    - Possibly send back a time estimate until it's completed
    - [name=Jesse: This could possibly be done automatically. Have the client server request the results after the ETA which is returned from the assignment upload.]
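Assuming the communication is plain HTTP (nothing here is decided), the client-side calls might look like the sketch below. The endpoint paths and the "202 means not ready" convention are placeholders, not an agreed API.

```python
import requests

PROCESSING_SERVER = "https://processing.example.edu"   # placeholder URL

def upload_assignment_set(archive_path: str, metadata: dict, token: str) -> dict:
    """Upload the packaged assignment set; the response is expected to carry
    a status, job id, and estimated wait time (see the payload notes below)."""
    with open(archive_path, "rb") as archive:
        resp = requests.post(
            f"{PROCESSING_SERVER}/assignment-sets",        # hypothetical endpoint
            files={"archive": archive},
            data=metadata,                                 # professor, institution, email
            headers={"Authorization": f"Bearer {token}"},
        )
    resp.raise_for_status()
    return resp.json()

def fetch_results(job_id: str, token: str):
    """Ask for the results; returns None when they are not ready yet."""
    resp = requests.get(
        f"{PROCESSING_SERVER}/results/{job_id}",           # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
    )
    if resp.status_code == 202:   # placeholder convention for "not ready yet"
        return None
    resp.raise_for_status()
    return resp.json()
```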
-
Client Portal ---> Client Server
- Requests Authorization (i.e. login)
- Probably lots of stuff; these two will be closely linked
  - Which means development will be closely linked as well
-
Client Server sends:
- user id
- scrubbed data
  - tarball (.tar.gz)
    - .cpp, .c, .java, .hpp, .h
    - 3 folders
      - previous submissions
      - current submissions
      - exclusions
- authentication token
- message confirming what was sent
-
Processing Server Responds (to client):
- status (OK, ERROR + Msg)
- job id
- estimated wait time
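Putting the two field lists together, the request and response bodies might look like this. All values are placeholders, and the actual wire format (JSON or otherwise) has not been chosen.

```python
# Illustrative shapes only; field names follow the lists above.
upload_request = {
    "user_id": "prof_42",                    # placeholder value
    "scrubbed_data": "submissions.tar.gz",   # tarball with previous/current/exclusions folders
    "auth_token": "<token>",
}

upload_response = {
    "status": "OK",              # or "ERROR" plus a message
    "job_id": "a1b2c3",
    "estimated_wait": "15 minutes",
}
```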
-
Processing Server has completed processing
-
Processing Server Sends email to professor:
- MSG | ERROR
-
Client Server checks for results:
- user id
- job id
- authentication token
-
Processing Server Responds with results:
- if NOT READY: respond as in "Processing Server Responds (to client)" above (status, job id, estimated wait time)
- else: XML as discussed
<?xml version="1.0"?>
<results>
  <match>
    <number> # </number>
    <student>
      <hash> student hash </hash>
      <file> name </file>
      <line_start> # </line_start>
      <line_finish> # </line_finish>
    </student>
    <student>
      <hash> student hash </hash>
      <file> name </file>
      <line_start> # </line_start>
      <line_finish> # </line_finish>
    </student>
  </match>
</results>
- Client Server receives XML:
  - and determines where it is stored and how the client portal can easily open it (see the parsing sketch below)
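A sketch of how the Client Server could parse the XML above into plain records before matching hashes back to students and handing the data to the client portal. It uses the standard library only; the record layout is an assumption.

```python
import xml.etree.ElementTree as ET

def parse_results_xml(xml_text: str) -> list[dict]:
    """Turn the results XML into a list of match records."""
    matches = []
    root = ET.fromstring(xml_text)
    for match in root.findall("match"):
        students = [
            {
                "hash": (student.findtext("hash") or "").strip(),
                "file": (student.findtext("file") or "").strip(),
                "line_start": (student.findtext("line_start") or "").strip(),
                "line_finish": (student.findtext("line_finish") or "").strip(),
            }
            for student in match.findall("student")
        ]
        matches.append({"number": (match.findtext("number") or "").strip(), "students": students})
    return matches
```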