tP: Practical Exam Dry Run (PE-D)
PE-D Overview
What: The latest release of the v1.3 period is subjected to a round of peer acceptance/system testing, also called the Practical Exam (PE) Dry Run, as this round of testing is similar to the graded Practical Exam that will be done at v1.4.
When, where: a 40-minute slot at the start of the week 11 lecture (to be done online).
Grading: The PE dry run affects your grade in the following ways.
- If you scored less than half of the marks in the PE, we will consider your performance in the PE dry run as well when calculating the PE marks.
- PE dry run is a way for you to practice for the actual PE.
- Taking part in the PE dry run will earn you participation points.
- There is no penalty for bugs reported in your product. Every bug you find is a win-win for you and the team whose product you are testing.
Why:
- To train you to do manual testing, bug reporting, bug triaging (i.e., assigning priority order), bug fixing, communicating with users/testers/developers, evaluating products, etc.
- To help you improve your product before the final submission.
PE-D Preparation
Ensure that you have accepted the invitation to join the GitHub org used by the module. Go to https://github.com/nus-cs2103-AY2021S2 to accept the invitation.
Ensure you have access to a computer that is able to run module projects e.g. has the right Java version.
Download the latest CATcher and ensure you can run it on your computer. You should have done this when you smoke-tested CATcher earlier in the week.
Have a good screen grab tool with annotation features so that you can quickly take a screenshot of a bug, annotate it, and post in the issue tracker.
You can use Ctrl+V to paste a picture from the clipboard into a text box in a bug report.
[Optional] Have a good screen recording tool if you plan to use screen recording clips as part of your bug reports. Ensure that your screen recording tool can create small files, as CATcher doesn't allow files bigger than 10 MB.
As the CATcher support for uploading screen recordings is new and limited, use it only if strictly necessary -- use screenshots for other cases.
Download the product to be tested.
Testing tips
Use easy-to-remember patterns in test data. For example, if you use 12345678 as a phone number while testing and it appears as 2345678 somewhere else in the UI, you can easily spot that the first digit has gone missing. But if you used a random number instead, detecting that bug won't be as easy. Similarly, if you use Alice Bee, Benny Lee, Charles Pereira as test data (note how the names start with letters A, B, C), it will be easy to detect if one goes missing, or if they appear in the incorrect order.
Go wide before you go deep. Do a light testing of all features first. That will give you a better idea of which features are likely to be more buggy. Spending equal time for all features or testing in the order the features appear in the UG is not always the best approach.
PE-D During the session
Use the CATcher Web version for reporting bugs. Use the desktop version only if the Web version runs into problems.
Use MS Teams (not Zoom) to contact prof if you need help during the session. Use Zoom chat only if you don't get a response via MS Teams.
How many bugs to report?
Report as many bugs as you can find during the given time. Take longer if you need to. If you can't find many bugs at this stage, when the product is largely untested, you are unlikely to be able to find enough bugs in the better-tested final submission later. In that case, all the more reason to spend more time and find more bugs now.
Bug reports marked as invalid by the receiving team later will not count for credit.
The median number of bugs reported in the previous semester's PE-D was 9. Someone reporting just 2-3 bugs is usually a sign of a half-hearted attempt rather than a lack of bugs to find. If you really can't find bugs, at least submit suggestions for improvements.
PE and PE-D are manual testing sessions. Using test automation tools or scripting is not allowed.
Test the product and report bugs as described below, when the prof informs you to begin testing.
Testing instructions for PE and PE-D
a) Launching the JAR file
- Get the jar file to be tested:
- Put the JAR file in an empty folder in which the app is allowed to create files (i.e., do not use a write-protected folder).
In rare cases, the team could have submitted a ZIP file instead of a JAR file. In that case, unzip that file into the target folder.
- Open a command window. Run the java -version command to ensure you are using Java 11.
- Check the UG to see if there are extra things you need to do before launching the JAR file, e.g., download another file from somewhere.
You may visit the team's releases page on GitHub if they have provided some extra files you need to download.
- Launch the jar file using the java -jar command rather than double-clicking (reason: to ensure the jar file is run using the same Java version that you verified above). Use double-clicking as a last resort. A sketch of the full launch sequence is given after this list.
If you are on Windows, use the DOS prompt or PowerShell (not the WSL terminal) to run the JAR file.
- If the product doesn't work at all: If the product fails catastrophically, e.g., cannot even launch, or even the basic commands crash the app, contact the invigilator (via MS Teams, and failing that, via Zoom chat) to receive a fallback team to test.
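For reference, a minimal sketch of the launch sequence, e.g., in PowerShell (the names pe-testing and duke.jar are placeholders for illustration; use the actual folder and jar file names):

```
cd pe-testing        # the empty folder you placed the JAR file in
java -version        # should report a Java 11 version, e.g., "11.0.x"
java -jar duke.jar   # duke.jar is a placeholder; use the actual jar file name
```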
b) What to test
c) What bugs to report?
d) How to report bugs
- Post bugs as you find them (i.e., do not wait to post all bugs at the end) because bug reports created/modified after the allocated time will not count.
e) Bug report format
- Each bug should be a separate issue i.e., do not report multiple problems in the same bug report.
- Write good quality bug reports; poor quality or incorrect bug reports will not earn credit.
- Use a descriptive title.
- Give a good description of the bug with steps to reproduce, expected, actual, and screenshots. If the receiving team cannot reproduce the bug, you will not be able to get credit for it.
- Assign exactly one severity.* label to the bug report. Bug reports without a severity label are considered severity.Low (lower severity bugs earn lower credit).
Bug Severity labels:
- severity.VeryLow : A flaw that is purely cosmetic and does not affect usage, e.g., a typo/spacing/layout/color/font issue in the docs or the UI that doesn't affect usage. Only cosmetic problems should have this label.
- severity.Low : A flaw that is unlikely to affect normal operations of the product. Appears only in very rare situations and causes a minor inconvenience only.
- severity.Medium : A flaw that causes occasional inconvenience to some users, but they can continue to use the product.
- severity.High : A flaw that affects most users and causes major problems for users, i.e., makes the product almost unusable for most users.
When applying these labels to documentation bugs, replace user with reader.
- Assign exactly one type.* label to the issue.
Type labels:
- type.FunctionalityBug : A functionality does not work as specified/expected.
- type.FeatureFlaw : Some functionality is missing from a feature delivered in v1.4, in a way that makes the feature less useful to the intended target user for normal usage, i.e., the feature is not 'complete'. In other words, an acceptance-testing bug that falls within the scope of v1.4 features. These issues are counted against the product design aspect of the project. Features that work as specified by the UG but should have been designed to work differently (from the end-user's point of view) fall in this category too.
- type.DocumentationBug : A flaw in the documentation, e.g., a missing step, a wrong instruction, typos.
PE-D After the session
- The relevant bug reports will be transferred to your issue tracker within a day after the session is over. Once you have received the bug reports for your product, you can decide whether you will act on reported issues before the final submission v1.4. For some issues, the correct decision could be to reject or postpone to a version beyond v1.4.
Reminder: There is no penalty for any of the bugs you received in the PE-D.
- If you have received stray bug reports (i.e., bug reports that don't seem to be about your project), do let us know ASAP (email the prof).
- You can navigate to the original bug report (via the back-link provided in the bug report given to you) and post in that issue thread to communicate with the tester who reported the bug e.g. to ask for more info, etc. However, the tester is not obliged to respond. Note that simply replying to the bug report in your own repo will not notify the tester.
- Do not argue with the tester to try to convince that person that your way is correct/better. If at all, you can gently explain the rationale for the current behavior, but do not waste time getting involved in long arguments. If you think the suggestion/bug is unreasonable, just thank the tester for their view and discontinue the discussion.
- If a bug report received is not useful at all (i.e., it looks like the tester submitted some random rubbish to increase the bug count), add the invalid label to it (add that label if it doesn't exist in your issue tracker). We will not count such bugs when giving credit to testers. If you receive 'insincere' bug reports that seem like just an attempt to increase the tester's bug count, please let us know.
- As for the other bug reports, you can deal with them as you see fit (i.e., triage, apply labels, assign, close, etc.).
- You may ignore the type.* / severity.* labels given by the tester. They will not affect you or the tester either way -- they were there just for the testers to practice. You may apply your own type/severity labels if you wish.
- If a bug report is simply a feature suggestion, you can take note of it and close it (to reduce clutter in the issue tracker, and to make it easy for the teaching team to track your progress on dealing with PE-D issues). Similarly, you can close PE-D issues not relevant to v1.3.
Note that listing bugs as 'known bugs' in the UG or specifying unreasonable constraints in the UG to make bugs 'out of scope' will not exempt those bugs from the final grading. That is, PE testers can still earn credit for reporting those bugs and you will still be penalized for them.
However, a product is allowed to have 'known limitations' (e.g., a daily expense tracking application meant for students is unable to handle expenses larger than $999) as long as they don't degrade the product's use within the intended scope. They will not be penalized.
tP: Practical Exam (PE)
PE Overview
The upfront objective of the PE is to increase the rigor of project grading. Assessing most aspects of the project involves an element of subjectivity. As the project counts for a large percentage of the final grade, it is not prudent to rely on evaluations of tutors alone, as there can be significant variations in how different tutors assess projects. That is why we collect more data points via the PE, so as to minimize the chance of your project being affected by evaluator bias.
The PE mainly evaluates your testing skills, in the following two parts:
- You will be given a chance to find bugs in another team's product. Furthermore, you will be given an opportunity to defend your bug reports against any possible objections. If you can successfully find bugs and defend them against objections, you earn marks (provided the product actually had bugs in the first place).
- Your product will be subjected to rigorous testing, and you will be given a chance to object to any bugs reported. You will lose marks for reported bugs that turn out to be real bugs, but only if your work has more bugs than a certain bar.
The above two can lead to a high-rigor, outcome-based evaluation of your testing skills, i.e., based on how well you achieve the objectives of testing, as opposed to indirect measures such as the number of test cases. The alternative is to rely solely on other easy-to-measure metrics (e.g., the number of test cases, test coverage, test LoC, etc.), which we don't think is right, given how important the testing aspect is. The ultimate objective of the PE is not even the higher rigor of grading. Because of the PE, you will realize that any bugs are very likely to be detected, which means you will work extra hard to avoid bugs; and THAT is the real benefit.
Problem: There is no way we can carry out the above-mentioned two-part evaluation at a high level of rigor using tutors as testers, or using an automated testing script. e.g., some tutors might not have the motivation to try hard enough to find bugs, and it will be hard to find tutors willing to spend many hours testing products so near to their own exams.
Solution: Get the two parts of the evaluation to feed each other by getting students to test each other's products. The fact that you are testing products created by your classmates, and objecting to bugs reported by your classmates, can make this a rather 'unpleasant' experience. You might feel like you are being pitted against each other, or as if you are forced to bring each other down. But as you read above, it is a necessary evil for this evaluation to be even possible. Given that the actual goal is to get you to create products with very few bugs, we think switching off the 'collaborative learning' mode for just a few days is a price worth paying to achieve that goal. After all, the PE is an evaluation activity (not a learning activity) and happens after the regular learning period is over.
You are not taking marks from someone else -- at least, don't think of it that way. The point of contention is 'is this really a bug?' which is independent of the people involved. Furthermore, the reward for detecting a bug and the penalty for having a bug in your code are calculated independently.
Still, none of us likes it when others point out problems in our work. Some of us don't even like pointing out problems in others' work. But we just have to learn not to take bug reports personally. Another important lesson is to learn how to report bugs in a way that doesn't feel like you are attacking or trying to sabotage the dev team.
The PE also evaluates aspects other than testing, e.g., your product evaluation skills, effort estimation skills, etc. Those aspects in particular are not graded solely based on peer ratings. Rather, PE data are cross-validated with tutors' grades to identify cases that need further investigation. When peer inputs are used for grading, they are usually combined with tutors' grades, with an appropriate weight for each. In some cases, ratings from team members are given a higher weight compared to ratings from other peers, if that is appropriate.
Grading:
- Your performance in the practical exam will affect your final grade and your peers', as explained in Admin: Project Grading section.
- As such, we have put in measures to identify and penalize insincere/random evaluations.
- Also see:
PE Preparation
- It's similar to the PE-D preparation described above.
PE Phase 1: Bug Reporting
- When: Last lecture slot of the semester (Fri, Apr 16th). Remember to join 15-30 minutes earlier than the usual lecture start time. The Zoom link will be given to you closer to the day.
PE Phase 1 will be conducted under exam conditions. We will be following the SoC's E-Exam SOP, combined with the deviations/refinements given below. Any non-compliance will be dealt with similarly to a non-compliance in the final exam.
- Proctoring will be done via Zoom. No admission if the following requirements are not met.
- You need two Zoom devices (PC: chat, audio, video; Phone: video, audio), unless you have an external web cam for your PC.
- Add your [PE_seat_number] in front of the first name of your Zoom display name, in your Zoom devices. Seat numbers can be found here. e.g., [M48] John Doe (M48 is the seat number), or [M48][PC] John Doe (for the PC, if using a phone as well).
- Set your camera so that all the following are visible:
- your face (side view, no mask)
- your hands
- the work area (i.e., the table top)
- the computer screen
- Join the Zoom waiting room 15-30 minutes before the start time. Admitting you to the Zoom session can take some time.
- In case of Zoom outage, we'll fall back on MS Teams (MST). Make sure you have MST running and have joined the MST Team for the class.
- Recording the screen is not required.
- You are allowed to use headphones/earphones.
- Only one screen is allowed. If you want to use the secondary monitor, you should switch off the primary monitor. The screen being used should be fully visible in the Zoom camera view.
- Do not use the public chat channel to ask the prof questions. If you do, you might accidentally reveal which team you are testing.
- Do not use more than one CATcher instance at the same time. Our grading scripts will red-flag you if you use multiple CATcher instances in parallel.
- Use MS Teams (not Zoom) private messages to communicate with the prof. Zoom sessions are invigilated by tutors, not the prof.
- Do not view the Zoom video feeds of others while the testing is ongoing. Keep the video view minimized.
- During the bug reporting periods (i.e., PE Phase 1 - part I and PE Phase 1 - part II), do not use websites/software not in the list given below. In particular, do not visit GitHub. However, you are allowed to visit pages linked in the UG/DG for the purpose of checking if the link is correct. If you need to visit a different website or use another software, please ask for permission first.
- Website: LumiNUS
- Website: Module website (e.g., to look up PE info)
- Software: CATcher, any text editor, any screen grab/recording software
- Software: PDF reader (to read the UG/DG or other references such as the textbook)
- Software: A text editor (to keep track of commands you tried)
- Do not use any other software running in the background e.g., Telegram chat.
- This is a manual testing session. Do not use any test automation tools or custom scripts.
PE Phase 1 - Part I Product Testing [60 minutes]
Bonus marks for high accuracy rates!
You will receive bonus marks if a high percentage (e.g., >70%) of your bugs are accepted as reported (i.e., the eventual type.* and severity.* of the bug match the values you chose initially, and the bug is accepted by the team).
Test the product and report bugs as described below. You may report both product bugs and documentation bugs during this period.
Testing instructions for PE and PE-D
a) Launching the JAR file
- Get the jar file to be tested:
- Put the JAR file in an empty folder in which the app is allowed to create files (i.e., do not use a write-protected folder).
In rare cases, the team could have submitted a ZIP file instead of a JAR file. In that case, unzip that file into the target folder.
- Open a command window. Run the java -version command to ensure you are using Java 11.
- Check the UG to see if there are extra things you need to do before launching the JAR file, e.g., download another file from somewhere.
You may visit the team's releases page on GitHub if they have provided some extra files you need to download.
- Launch the jar file using the java -jar command rather than double-clicking (reason: to ensure the jar file is run using the same Java version that you verified above). Use double-clicking as a last resort.
If you are on Windows, use the DOS prompt or PowerShell (not the WSL terminal) to run the JAR file.
- If the product doesn't work at all: If the product fails catastrophically, e.g., cannot even launch, or even the basic commands crash the app, contact the invigilator (via MS Teams, and failing that, via Zoom chat) to receive a fallback team to test.
b) What to test
c) What bugs to report?
d) How to report bugs
- Post bugs as you find them (i.e., do not wait to post all bugs at the end) because bug reports created/modified after the allocated time will not count.
e) Bug report format
- Each bug should be a separate issue i.e., do not report multiple problems in the same bug report.
- Write good quality bug reports; poor quality or incorrect bug reports will not earn credit.
- Use a descriptive title.
- Give a good description of the bug with steps to reproduce, expected, actual, and screenshots. If the receiving team cannot reproduce the bug, you will not be able to get credit for it.
- Assign exactly one severity.* label to the bug report. Bug reports without a severity label are considered severity.Low (lower severity bugs earn lower credit).
Bug Severity labels:
- severity.VeryLow : A flaw that is purely cosmetic and does not affect usage, e.g., a typo/spacing/layout/color/font issue in the docs or the UI that doesn't affect usage. Only cosmetic problems should have this label.
- severity.Low : A flaw that is unlikely to affect normal operations of the product. Appears only in very rare situations and causes a minor inconvenience only.
- severity.Medium : A flaw that causes occasional inconvenience to some users, but they can continue to use the product.
- severity.High : A flaw that affects most users and causes major problems for users, i.e., makes the product almost unusable for most users.
When applying these labels to documentation bugs, replace user with reader.
- Assign exactly one type.* label to the issue.
Type labels:
- type.FunctionalityBug : A functionality does not work as specified/expected.
- type.FeatureFlaw : Some functionality is missing from a feature delivered in v1.4, in a way that makes the feature less useful to the intended target user for normal usage, i.e., the feature is not 'complete'. In other words, an acceptance-testing bug that falls within the scope of v1.4 features. These issues are counted against the product design aspect of the project. Features that work as specified by the UG but should have been designed to work differently (from the end-user's point of view) fall in this category too.
- type.DocumentationBug : A flaw in the documentation, e.g., a missing step, a wrong instruction, typos.
PE Phase 1 - Part II Evaluating Documents [30 minutes]
- This slot is for reporting documentation bugs only. You may report bugs related to the UG and the DG.
Only the content of the UG/DG PDF files (not the online version) should be considered.
- For each bug reported, cite evidence and justify. For example, if you think the explanation of a feature is too brief, explain what information is missing and why the omission hinders the reader.
PE Phase 1 - Part III Overall Evaluation [15 minutes]
- To be submitted via TEAMMATES. You are recommended to complete this during the PE session itself, but you have until the end of the day to submit (or revise) your submissions.
Important questions included in the evaluation:
Q Quality of the product design,
Evaluate based on the User Guide and the actual product behavior.
Criterion | Unable to judge | Low | Medium | High |
---|---|---|---|---|
target user | Not specified | | | Clearly specified and narrowed down appropriately |
value proposition | Not specified | The value to target user is low. App is not worth using | Some small group of target users might find the app worth using | Most of the target users are likely to find the app worth using |
optimized for target user | | Not enough focus for CLI users | Mostly CLI-based, but cumbersome to use most of the time | Feels like a fast typist can be more productive with the app, compared to an equivalent GUI app without a CLI |
feature-fit | | Many of the features don't fit with others | Most features fit together but a few may be possible misfits | All features fit together to form a cohesive whole |
Q Compared to AddressBook-Level3 (AB3), the overall quality of the UG you evaluated is,
Evaluate based on fit-for-purpose, from the perspective of a target user.
For reference, the AB3 UG is here.
Q Compared to AB3, the overall quality of the DG you evaluated is,
Evaluate based on fit-for-purpose from the perspective of a new team member trying to understand the product's internal design by reading the DG.
For reference, the AB3 DG is here.
Q If the implementation effort required to create AB3 from scratch is 10, the estimated implementation effort of this team is, [0..20] e.g., if you give 8, that means the team's effort is about 80% of that spent on creating AB3. We expect most typical teams to score near to 10.
- Do read the DG appendix named Effort, if any.
- Consider implementation work only (i.e., exclude testing, documentation, project management, etc.)
- Do not give a high value just to be nice. Your responses will be used to evaluate your effort estimation skills.
Q [Optional] Concerns or any noteworthy observations about the product you evaluated
PE Phase 2: Developer Response
Deadline: Mon, Apr 19th 2359
This phase is for you to respond to the bug reports you received.
Bonus marks for high accuracy rates!
You will receive bonus marks if a high percentage (e.g., >80%) of bugs are accepted as triaged (i.e., the eventual type.*, severity.*, and response.* of the bug match the ones you chose).
Duration: The review period will start around 1 day after the PE and will last for 2-3 days (exact times will be announced later). However, you are recommended to finish this task ASAP, to minimize cutting into your exam preparation work.
Bug reviewing is recommended to be done as a team as some of the decisions need team consensus.
Instructions for Reviewing Bug Reports
- Don't freak out if there are a lot of bug reports. Many can be duplicates, and some can be false positives. In any case, we anticipate that all of these products will have some bugs, and our penalty for bugs is not harsh. Furthermore, the penalty depends on the severity of the bug; some bugs may not even be penalized.
- As mentioned earlier, the penalty for having a specific bug is not the same as the reward for reporting that bug (it's not a zero-sum game). For example, the reward for testers will be higher (because we don't expect the products to have that many bugs after they have gone through so much prior testing).
The penalty for a minor bug (e.g., -0.15 -- an indicative value only; the actual value depends on the severity, type, and the number of assignees) is unlikely to make a difference in your final grade, especially given that the penalty applies only if you have more than a certain number of bugs.
For example, in a typical case a developer might be assigned 5+ severity.VeryLow bugs before the penalty even starts affecting their marks.
Accordingly, we hope you'll accept bug reports graciously (rather than fight tooth-and-nail to reject every bug report received) if you think the bug is within the ballpark of 'reasonable'. Those minor bugs are really not worth stressing/fighting over.
- If a bug seems to be for a different product (i.e. wrongly assigned to your team), let us know ASAP.
- If the bug is reported multiple times,
- Mark all copies EXCEPT one as duplicates of the one left out (let's call that one the original), using the A Duplicate of tick box.
- For each group of duplicates, all duplicates should point to one original, i.e., no multiple levels of duplicates, and no cyclical duplication relationships. For example, if issues #10, #12, and #15 report the same bug, mark #12 and #15 as duplicates of #10; do not mark #15 as a duplicate of #12.
- If the duplication status is eventually accepted, all duplicates will be assumed to have inherited the type.* and severity.* from the original.
- Apply one of these labels (if missing, we will assign response.Accepted)
Response Labels:
- response.Accepted : You accept it as a bug.
- response.NotInScope : It is a valid issue, but not something the team should be penalized for, e.g., it was not related to features delivered in v1.4.
- response.Rejected : What the tester treated as a bug is in fact expected and correct behavior (from the user's point of view), or the tester was mistaken in some other way.
- response.CannotReproduce : You are unable to reproduce the behavior reported in the bug after multiple tries.
- response.IssueUnclear : The issue description is not clear. Don't post comments asking the tester to give more info; the tester will not be able to see those comments because the bug reports are anonymous.
Only the response.Accepted bugs are counted against the dev team. While response.NotInScope bugs are not counted against the dev team, they can earn a small amount of consolation marks for the tester. The other three do not affect the marks of either the dev team or the tester, except when calculating bonus marks for accuracy.
- If you disagree with the original bug type assigned to the bug, you may change it to the correct type.
Type labels:
- type.FunctionalityBug : A functionality does not work as specified/expected.
- type.FeatureFlaw : Some functionality is missing from a feature delivered in v1.4, in a way that makes the feature less useful to the intended target user for normal usage, i.e., the feature is not 'complete'. In other words, an acceptance-testing bug that falls within the scope of v1.4 features. These issues are counted against the product design aspect of the project. Features that work as specified by the UG but should have been designed to work differently (from the end-user's point of view) fall in this category too.
- type.DocumentationBug : A flaw in the documentation, e.g., a missing step, a wrong instruction, typos.
- If you disagree with the original severity assigned to the bug, you may change it to the correct level.
Bug Severity labels:
- severity.VeryLow : A flaw that is purely cosmetic and does not affect usage, e.g., a typo/spacing/layout/color/font issue in the docs or the UI that doesn't affect usage. Only cosmetic problems should have this label.
- severity.Low : A flaw that is unlikely to affect normal operations of the product. Appears only in very rare situations and causes a minor inconvenience only.
- severity.Medium : A flaw that causes occasional inconvenience to some users, but they can continue to use the product.
- severity.High : A flaw that affects most users and causes major problems for users, i.e., makes the product almost unusable for most users.
When applying these labels to documentation bugs, replace user with reader.
- If you need the teaching team's inputs when deciding on a bug (e.g., if you are not sure if the UML notation is correct), post in the forum. Remember to quote the issue number shown in CATcher (it appears at the end of the issue title).
- Decide who should take responsibility for the bug. Use the Assignees field to assign the issue to that person(s). There is no need to actually fix the bug though; it's simply an indication/acceptance of responsibility. If there is no assignee, we will distribute the penalty for that bug (if any) equally among all team members, e.g., if the penalty is -0.4 and there are 4 members, each member will be penalized -0.1.
- If it is not easy to decide the assignee(s), we recommend (but do not enforce) that the feature owner be assigned bugs related to the feature. Reason: the feature owner should have defended the feature against bugs using automated tests and defensive coding techniques.
As far as possible, choose the correct type.*, severity.*, response.*, assignees, and duplicate status even for bugs you are not accepting. Reason: your non-acceptance may be rejected in a later phase, in which case we need to grade it as an accepted bug.
If a bug's 'duplicate' status is rejected later (i.e., the tester says it is not really a duplicate and the teaching team agrees with the tester), it will inherit the type/severity/assignees from the 'original' bug that it was claimed to be a duplicate of.
Justify your response. For all of the following cases, you must add a comment justifying your stance. Testers will get to respond to all those cases, and they will be double-checked by the teaching team in later phases.
- downgrading severity
- non-acceptance of a bug
- changing the bug type
- non-obvious duplicate
- You can also refer to the below guidelines:
PE Phase 3: Tester Response
Start: Within 1 day after Phase 2 ends.
While you are waiting for Phase 3 to start, comments will be added to the bug reports in your /pe repo to indicate the response each received from the receiving team. Please do not edit any of those comments or reply to them via the GitHub interface. Doing so can invalidate them, in which case the grading script will assume that you agree with the dev team's response. Instead, wait till the start of Phase 3 is announced, after which you should use CATcher to respond.
Deadline: Thu, Apr 22nd 2359
- In this phase you will get to state whether you agree or disagree with the dev team's response to the bugs you reported. If a bug reported has been subjected to any of the below by the receiving dev team, you can record your objections and the reason for the objection.
- not accepted
- severity downgraded
- bug type changed
- bug flagged as duplicate (Note that you still get credit for bugs flagged as duplicates, unless you reported both bugs yourself. Nevertheless, it is in your interest to object to bugs being flagged incorrectly as duplicates because when a bug is reported by more testers, it will be considered an 'obvious' bug and will earn slightly less credit than otherwise)
- If you disagree with the team's decision but would like to revise your own initial type/severity/response as well, you can state that in your explanation, e.g., you rated the bug severity.High and the team changed it to severity.Low, but now you think it should be severity.Medium.
- You can also refer to the below guidelines:
- If you do not respond to a dev response, we'll assume that you agree with it.
- Procedure:
PE Phase 4: Tutor Moderation
- In this phase tutors will look through all dev responses you objected to in the previous phase and decide on a final outcome.
- In the unlikely case we need your inputs, a tutor will contact you.