There is no standard formula for "enough" testing, at least from a software testing perspective, and I believe the same applies to any other testing methodology. It always depends on the quality of the code or product itself, the available time and deadlines, whether it is a new product or an upgrade of an existing one, etc. But there are some tips worth taking into consideration:

- Before receiving the product to test, prepare a checklist or a set of test cases/plans that covers the different possibilities and failure-input criteria (see the sketch after this list).
- Review previous faults on similar products and make sure your current tests cover those cases.
- If many faults are found during the initial (planned) testing time, the product is faulty and definitely needs more testing. If only a few faults are found, the product is in reasonably good shape; once the tester is confident that the main functionalities work, some additional out-of-the-box testing should earn the client's satisfaction.
- Always have a second pair of eyes. Having the same person perform the same tests all the time leads to the Pesticide Paradox: repeated tests stop finding new bugs. A review by another tester results in a more solid product, even if the second pass is just a quick sanity check.
- Make sure to run an A-to-Z test of the product before the final delivery.
- Always add buffer time to your estimation.
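As a minimal sketch of what a checklist of failure-input criteria can look like in practice, here is a parameterized pytest example. The `validate_age` function is purely hypothetical (an assumption for illustration, not from the original); the point is that valid and invalid inputs from the test plan each become one row in a table of cases:

```python
import pytest

# Hypothetical function under test (an assumption for illustration):
# accepts an integer age and returns True if valid, raising ValueError
# on any invalid input.
def validate_age(age):
    if not isinstance(age, int):
        raise ValueError("age must be an integer")
    if age < 0 or age > 150:
        raise ValueError("age out of range")
    return True

# Happy-path inputs taken from the test plan, including boundaries.
@pytest.mark.parametrize("age", [0, 1, 30, 150])
def test_valid_ages(age):
    assert validate_age(age) is True

# Failure-input criteria: out-of-range values, wrong types, missing data.
@pytest.mark.parametrize("bad_age", [-1, 151, 3.5, "thirty", None])
def test_invalid_ages(bad_age):
    with pytest.raises(ValueError):
        validate_age(bad_age)
```

Keeping the cases in parameter tables like this makes it easy to add the faults found on similar products as new rows, so regressions from past releases stay covered.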
You will be assessed mainly on your approach and not the final output. If you can explain the what, why, and how of any approach you would take towards a programming solution, that should be sufficient.