Show simple item record

dc.contributor.author: Chuphal, Himanshu
dc.contributor.author: Dimitrov, Kristiyan
dc.date.accessioned: 2020-07-06T09:39:08Z
dc.date.available: 2020-07-06T09:39:08Z
dc.date.issued: 2020-07-06
dc.identifier.uri: http://hdl.handle.net/2077/65507
dc.description.abstract: With the adoption of Deep Learning (DL) systems in security- and safety-critical domains, a variety of traditional testing techniques, novel techniques, and new ideas are increasingly being adopted and implemented in DL testing tools. However, there is currently no benchmarking method that helps practitioners compare the performance of different DL testing tools. The primary objective of this study is to construct a benchmarking method that supports practitioners in selecting a DL testing tool. In this paper, we perform an exploratory study of fifteen DL testing tools to construct a benchmarking method, taking one of the first steps towards designing such a method for DL testing tools. We propose a set of seven tasks, based on a requirement-scenario-task model, to benchmark DL testing tools, and we evaluate four DL testing tools using our benchmarking method. The results show that the current focus in the field of DL testing is on improving the robustness of DL systems; however, common performance metrics for evaluating DL testing tools are difficult to establish. Our study suggests that even though the number of DL testing research papers is increasing, the field is still in an early phase and not yet sufficiently developed to run a full benchmarking suite. Nevertheless, the benchmarking tasks defined in the method can help DL practitioners select a DL testing tool. For future research, we recommend a collaborative effort among DL testing tool researchers to extend the benchmarking method. [sv]
dc.language.iso: eng [sv]
dc.subject: Deep Learning [sv]
dc.subject: DL [sv]
dc.subject: DL testing tools [sv]
dc.subject: testing [sv]
dc.subject: software engineering [sv]
dc.subject: design [sv]
dc.subject: benchmark [sv]
dc.subject: model [sv]
dc.subject: datasets [sv]
dc.subject: tasks [sv]
dc.subject: tools [sv]
dc.title: Benchmarking Deep Learning Testing Techniques: A Methodology and Its Application [sv]
dc.title.alternative: Benchmarking Deep Learning Testing Techniques: A Methodology and Its Application [sv]
dc.type: text
dc.setspec.uppsok: Technology
dc.type.uppsok: H2
dc.contributor.department: Göteborgs universitet/Institutionen för data- och informationsteknik [swe]
dc.contributor.department: University of Gothenburg/Department of Computer Science and Engineering [eng]
dc.type.degree: Student essay

