Alizadeh Noughabi H, Mohtashami Borzadaran G R. An Updated Review of Goodness of Fit Tests Based on Entropy. JIRSS. 2020; 19 (2) :175-204

URL: http://jirss.irstat.ir/article-1-550-en.html

Different approaches to goodness-of-fit (GOF) testing have been proposed. This survey presents the developments in entropy-based GOF testing during the last 50 years, from the very first origins to the most recent advances for different data types and models. GOF tests based on Shannon entropy were initiated by Vasicek in 1976 and have since been pursued by many authors. In this paper, we describe the entropy-based GOF tests constructed from the beginning to the present. First, the GOF problem and the different types of GOF tests are stated. Then, the construction of entropy-based GOF tests for complete and censored data is explained, and the contributions on this subject are reviewed.
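The entropy-based testing idea surveyed here can be made concrete with Vasicek's (1976) test for normality, the starting point of this literature. The following is a minimal sketch, not code from the survey: the function names and the window parameter m are illustrative choices. It computes Vasicek's spacing-based sample entropy H_mn and the statistic K_mn = exp(H_mn)/σ̂, which under normality is close to its limiting value √(2πe) ≈ 4.13 and takes small values under departures from normality.

```python
import numpy as np

def vasicek_entropy(x, m):
    # Vasicek (1976) spacing-based estimator of Shannon entropy:
    #   H_mn = (1/n) * sum_i log( n/(2m) * (x_(i+m) - x_(i-m)) ),
    # where order statistics beyond the sample range are clamped to the extremes.
    # Assumes continuous data (ties give zero spacings and an undefined log).
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(n)
    upper = np.minimum(i + m, n - 1)   # x_(i+m), capped at the sample maximum
    lower = np.maximum(i - m, 0)       # x_(i-m), floored at the sample minimum
    spacings = x[upper] - x[lower]
    return float(np.mean(np.log(n / (2.0 * m) * spacings)))

def vasicek_normality_statistic(x, m):
    # K_mn = exp(H_mn) / sigma_hat. Under normality K_mn approaches
    # sqrt(2*pi*e) ~ 4.133; the test rejects normality for small K_mn.
    return float(np.exp(vasicek_entropy(x, m)) / np.std(x))
```

In practice the critical values of K_mn are obtained by Monte Carlo simulation, as in the papers surveyed, since the statistic has a downward finite-sample bias and its null distribution depends on n and m.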

Type of Study: Review Article | Subject: 62Exx: Distribution theory

Received: 2018/10/27 | Accepted: 2021/02/3 | Published: 2020/12/11

1. Abbasnejad, M. (2016), Modified estimators of Renyi entropy with application in testing exponentiality based on transformed data. ISTATISTIK: Journal of the Turkish Statistical Association, 9, 42-55.

2. Abbasnejad, M., and Mohammadi, D. (2010), A Test for symmetric distributions based on Renyi entropy. Journal of Statistical Science, 4, 21-33.

3. Alizadeh Noughabi, H. (2019), A new estimator of Kullback-Leibler information and its application in goodness of fit tests, Journal of Statistical Computation and Simulation, 89, 1914-1934. [DOI:10.1080/00949655.2019.1602870]

4. Alizadeh Noughabi, H. (2017a), Efficiency of ranked set sampling in tests for normality. Journal of Statistical Computation and Simulation 87, 956-965. [DOI:10.1080/00949655.2016.1238090]

5. Alizadeh Noughabi, H. (2017b), Entropy based tests of uniformity: A Monte Carlo power comparison. Communication in Statistics: Simulation and Computation, 46, 1266-1279. [DOI:10.1080/03610918.2014.999086]

6. Alizadeh Noughabi, H. (2017c), Testing exponentiality based on Kullback-Leibler information for progressively Type II censored data. Communications in Statistics-Simulation and Computation, 46, 7624-7638. [DOI:10.1080/03610918.2016.1248569]

7. Alizadeh Noughabi, H. (2015), Tests of symmetry based on the sample entropy of order statistics and power comparison. Sankhya B: The Indian Journal of Statistics, 77, 240-255. [DOI:10.1007/s13571-015-0103-5]

8. Alizadeh Noughabi, H. (2015a), Entropy estimation using numerical methods. Annals of Data Science, 2, 231-241. [DOI:10.1007/s40745-015-0045-9]

9. Alizadeh Noughabi, H. (2015b), On the estimation of Shannon entropy. Journal of Statistical Research of Iran (JSRI), 12, 57-70. [DOI:10.18869/acadpub.jsri.12.1.57]

10. Alizadeh Noughabi, H. (2010), A new estimator of entropy and its application in testing normality. Journal of Statistical Computation and Simulation, 80, 1151-1162. [DOI:10.1080/00949650903005656]

11. Alizadeh Noughabi, H., and Akbari, M. GH. (2016), Testing normality based on fuzzy data. International Journal of Intelligent Technologies and Applied Statistics, 9, 37-52.

12. Alizadeh Noughabi, H., and Alizadeh Noughabi, R. (2008), Power comparisons of goodness of fit tests based on entropy with other methods. Journal of Statistical Science, 2, 97-113.

13. Alizadeh Noughabi, H., and Arghami, N. R. (2013), Goodness-of-fit tests based on correcting moments of entropy estimators. Communications in Statistics-Simulation and Computation, 42(3), 499-513. [DOI:10.1080/03610918.2011.634535]

14. Alizadeh Noughabi, H., and Arghami, N. R. (2012), General treatment of goodness-of-fit tests based on Kullback-Leibler information. Journal of Statistical Computation and Simulation, 83, 1556-1569. [DOI:10.1080/00949655.2012.667100]

15. Alizadeh Noughabi, H., and Arghami, N. R. (2011a), Goodness of fit tests based on correcting moments of entropy estimators. Communication in Statistics: Simulation and Computation, 42, 499-513. [DOI:10.1080/03610918.2011.634535]

16. Alizadeh Noughabi, H., and Arghami, N. R. (2011b), Monte Carlo comparison of five exponentiality tests using different entropy estimates. Journal of Statistical Computation and Simulation, 81, 1579-1592. [DOI:10.1080/00949655.2010.496368]

17. Alizadeh Noughabi, H., and Arghami, N. R. (2011c), Monte Carlo comparison of seven normality tests. Journal of Statistical Computation and Simulation, 81, 965-972. [DOI:10.1080/00949650903580047]

18. Alizadeh Noughabi, H., and Arghami, N. R. (2011d), Testing exponentiality using transformed data. Journal of Statistical Computation and Simulation, 81, 511-516. [DOI:10.1080/00949650903348171]

19. Alizadeh Noughabi, H., and Arghami, N. R. (2011e), Testing exponentiality based on characterizations of the exponential distribution. Journal of Statistical Computation and Simulation, 81, 1641-1651. [DOI:10.1080/00949655.2010.498373]

20. Alizadeh Noughabi, H., and Arghami, N. R. (2011f), Testing normality using transformed data. Communication in Statistics: Theory and Methods, 42, 3065-3075. [DOI:10.1080/03610926.2011.611604]

21. Alizadeh Noughabi, H., and Arghami, N. R. (2010), A new estimator of entropy. Journal of the Iranian Statistical Society (JIRSS), 9, 53-64.

22. Alizadeh Noughabi, H., and Balakrishnan, N. (2016), Tests of goodness of fit based on Phi divergence. Journal of Applied Statistics, 43, 412-429. [DOI:10.1080/02664763.2015.1063116]

23. Alizadeh Noughabi, H., and Balakrishnan, N. (2015), Goodness of fit using a new estimate of Kullback-Leibler information based on type II censored data. IEEE Transactions on Reliability, 64, 627-635. [DOI:10.1109/TR.2014.2366763]

24. Alizadeh Noughabi, H. and Chahkandi, M. (2018), Testing the validity of the exponential model for hybrid Type-I censored data. Communications in Statistics-Theory and Methods, 47, 5770-5778. [DOI:10.1080/03610926.2017.1402046]

25. Alizadeh Noughabi, H. and Jarrahiferiz, J. (2018), Moments of nonparametric probability density functions of entropy estimators applied to testing the inverse Gaussian distribution. Journal of Statistical Computation and Simulation, 88, 3217-3229. [DOI:10.1080/00949655.2018.1508464]

26. Alizadeh Noughabi, H., and Park, S. (2016), Tests of fit for the Laplace distribution based on correcting moments of entropy estimators. Journal of Statistical Computation and Simulation, 86, 2165-2181. [DOI:10.1080/00949655.2015.1104685]

27. Alizadeh Noughabi, H., and Vexler, A. (2016), An efficient correction to the density-based empirical likelihood ratio goodness of fit test for the Inverse Gaussian distribution. Journal of Applied Statistics, 43, 2988-3003. [DOI:10.1080/02664763.2016.1156657]

28. Al-Omari, A. I., and Haq, A. (2012), Goodness-of-fit testing for the inverse Gaussian distribution based on new entropy estimation using ranked set sampling and double ranked set sampling. Environmental Systems Research, 1, 1-8. [DOI:10.1186/2193-2697-1-8]

29. Amini, M., Mehdizadeh, M. and Arghami, N. R. (2016), Improved estimator of the entropy and goodness of fit tests in ranked set sampling. arXiv:1106.1733v1 [stat.CO]

30. Anderson, T. W., and Darling, D. A. (1954), A test of goodness of fit. Journal of the American Statistical Association, 49, 765-769. [DOI:10.1080/01621459.1954.10501232]

31. Balakrishnan, N., Habibi Rad, A. and Arghami, N. R. (2007), Testing exponentiality based on Kullback-Leibler information with progressively type-II censored data. IEEE Transactions on Reliability, 56, 301-307. [DOI:10.1109/TR.2007.895308]

32. Baratpour S., and Khodadadi F. (2012), A cumulative residual entropy characterization of the Rayleigh distribution and related goodness-of-fit test. Journal of Statistical Research of Iran, 9, 115-131. [DOI:10.18869/acadpub.jsri.9.2.115]

33. Baratpour, S., and Habibi Rad, A. (2012), Testing goodness-of-fit for exponential distribution based on cumulative residual entropy. Communications in Statistics-Theory and Methods, 41, 1387-1396. [DOI:10.1080/03610926.2010.542857]

34. Baratpour, S., and Habibi Rad, A. (2016), Exponentiality test based on the progressive type II censoring via cumulative entropy. Communications in Statistics-Theory and Methods, 45, 2625-2637. [DOI:10.1080/03610918.2014.917673]

35. Beirlant, J., Dudewicz, E. J., Györfi, L., and van der Meulen, E. C. (1997), Nonparametric entropy estimation: An overview. International Journal of Mathematical and Statistical Sciences, 6, 17-39.

36. Bonett, D. G., and Seier, E. (2002), A test of normality with high uniform power. Computational Statistics and Data Analysis, 40, 435-445. [DOI:10.1016/S0167-9473(02)00074-9]

37. Bowman, A. W. (1992), Density based tests for goodness of fit. Journal of Statistical Computation and Simulation, 40, 1-13. [DOI:10.1080/00949659208811361]

38. Chahkandi, M., and Alizadeh Noughabi, H. (2016), Testing exponentiality of the residual life, based on dynamic cumulative residual entropy. Statistics and Probability Letters, 117, 1-11. [DOI:10.1016/j.spl.2016.05.001]

39. Chen, L., and Shapiro, S. S. (1995), An alternative test for normality based on normalized spacings. Journal of Statistical Computation and Simulation, 53, 269-287. [DOI:10.1080/00949659508811711]

40. Choi, B. (2008), Improvement of goodness-of-fit test for normal distribution based on entropy and power comparison. Journal of Statistical Computation and Simulation, 78, 781-788. [DOI:10.1080/00949650701299451]

41. Choi, B., and Kim, K. (2006), Testing goodness-of-fit for Laplace distribution based on maximum entropy. Statistics, 40, 517-531. [DOI:10.1080/02331880600822473]

42. Choi, B., Kim, K., and Song, S. H. (2004), Goodness-of-fit test for exponentiality based on Kullback-Leibler information. Communication in Statistics: Simulation and Computation, 33, 525-536. [DOI:10.1081/SAC-120037250]

43. Correa, J. C. (1995), A new estimator of entropy. Communications in Statistics-Theory and Methods, 24, 2439-2449. [DOI:10.1080/03610929508831626]

44. D'Agostino, R. B., and Pearson, E. S. (1973), Tests for departure from normality: Empirical results for the distributions of b2 and √b1. Biometrika, 60, 613-622. [DOI:10.2307/2335012]

45. D'Agostino, R. B., and Stephens, M. A. (1986), Goodness-of-Fit Techniques, New York: Marcel Dekker.

46. Dobrushin, R. L. (1958), Simplified method of experimental estimate of entropy of stationary sequence. Theory of Probability and its Applications, 3, 462-464. [DOI:10.1137/1103036]

47. Doornik, J. A., and Hansen, H. (1994), An omnibus test for univariate and multivariate normality, Working Paper, Nuffield College, Oxford.

48. Dudewicz, E. J., and van der Meulen, E. C. (1987), Empirical entropy, a new approach to non-parametric entropy estimation. In: Puri, M. L., Vilaplana, J. P., and Wertz, W. (Eds.), New Perspectives in Theoretical and Applied Statistics. Wiley, New York, 202-207.

49. Dudewicz, E. J., and van der Meulen, E. C. (1981), Entropy-Based Tests of Uniformity. Journal of the American Statistical Association, 76, 967-974. [DOI:10.1080/01621459.1981.10477750]

50. Drissi, N., Chonavel, T., and Boucher, J. M. (2008), Generalized cumulative residual entropy for distributions with unrestricted supports. Research Letters in Signal Processing, Article ID 790607, 5 pages. [DOI:10.1155/2008/790607]

52. Ebrahimi, N. (2001), Testing for Uniformity of the Residual Life Time Based on Dynamic Kullback-Leibler Information. Annals of the Institute of Statistical Mathematics, 53, 325-337. [DOI:10.1023/A:1012085320762]

53. Ebrahimi, N. (1998), Testing exponentiality of the residual life, based on dynamic Kullback-Leibler information. IEEE Transactions on Reliability, 47, 197-201. [DOI:10.1109/24.722289]

54. Ebrahimi, N., Pflughoeft, K., and Soofi, E. S. (1994), Two measures of sample entropy. Statistics and Probability Letters, 20, 225-234. [DOI:10.1016/0167-7152(94)90046-9]

55. Ebrahimi, N., Soofi, E. S., and Habibullah, M. (1992), Testing exponentiality based on Kullback-Leibler information. Journal of the Royal Statistical Society B, 54, 739-748. [DOI:10.1111/j.2517-6161.1992.tb01447.x]

56. Epps, T. W., and Pulley, L. B. (1983), A test for normality based on the empirical characteristic function. Biometrika, 70, 723-726. [DOI:10.1093/biomet/70.3.723]

57. Esteban, M. D., Castellanos, M. E., Morales, D., and Vajda, I. (2001), Monte Carlo comparison of four normality tests using different entropy estimates. Communications in Statistics-Simulation and Computation, 30, 761-785. [DOI:10.1081/SAC-100107780]

58. Fernandes, M., and Neri, B. (2010), Nonparametric entropy-based tests of independence between stochastic processes. Econometric Reviews, 29, 276-306. [DOI:10.1080/07474930903451557]

59. Filliben, J. J. (1975), The probability plot correlation coefficient test for normality. Technometrics, 17, 111-117. [DOI:10.1080/00401706.1975.10489279]

60. Gel, Y. R., and Gastwirth, J. L. (2008), A robust modification of the Jarque-Bera test of normality. Economics Letters, 99, 30-32. [DOI:10.1016/j.econlet.2007.05.022]

61. Gokhale, D. V., (1983), On entropy-based goodness-of-fit tests. Computational Statistics and Data Analysis, 1, 157-165. [DOI:10.1016/0167-9473(83)90087-7]

62. Goria, M. N., Leonenko N. N., Mergel V. V., and Novi Inverardi, P. L. (2005), A new class of random vector entropy estimators and its applications in testing statistical hypotheses. Journal of Nonparametric Statistics, 17, 277-297. [DOI:10.1080/104852504200026815]

63. Greenwood, C., and Nikulin, M. S. (1996), A Guide to Chi-Squared Testing, New York: Wiley, ISBN 0-471-55779-X.

64. Gurevich, G., and Vexler, A. (2011), A Two-sample empirical likelihood ratio test based on samples entropy. Statistics and Computing, 21, 657-670. [DOI:10.1007/s11222-010-9199-7]

65. Gurevich, G., and Davidson, A. (2008), Standardized forms of Kullback-Leibler information based statistics for normality and exponentiality. Computer Modelling and New Technologies, 12, 14-25.

66. Habibi Rad, A., and Arghami, N. R. (2007), Test for symmetry of distribution based on the entropy. Journal of Statistical Science, 1, 109-120.

67. Habibi Rad, A., Yousefzadeh, F. and Balakrishnan, N. (2011), Goodness-of-fit test based on Kullback-Leibler information for progressively type-II censored data. IEEE Transactions on Reliability, 60, 570-579. [DOI:10.1109/TR.2011.2162470]

68. Hall, P., and Morton, S. C. (1993), On the estimation of entropy. Annals of the Institute of Statistical Mathematics, 45, 69-88. [DOI:10.1007/BF00773669]

69. Jammalamadaka, S. R., and Lund, U. J. (2000), An entropy-based test for goodness of fit of the von Mises distribution (with U. Lund). Journal of Statistical Computation and Simulation, 67, 319-332. [DOI:10.1080/00949650008812048]

70. Jarque, C., and Bera, A. (1980), Efficient tests for normality, homoscedasticity and serial independence of regression residuals. Economics Letters, 6, 255-259. [DOI:10.1016/0165-1765(80)90024-5]

71. Joe, H. (1989), Estimation of entropy and other functionals of a multivariate density. Annals of the Institute of Statistical Mathematics, 41, 683-697. [DOI:10.1007/BF00057735]

72. Kolmogorov, A. N. (1933), Sulla determinazione empirica di una legge di distribuzione. Giornale dell'Istituto Italiano degli Attuari, 4, 83-91.

73. Kuiper, N. H. (1960), Tests concerning random points on a circle. Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen, Series A, 63, 38-47. [DOI:10.1016/S1385-7258(60)50006-0]

74. Lee, S. (2013), A maximum entropy type test of fit: Composite hypothesis case. Computational Statistics and Data Analysis, 57, 59-67. [DOI:10.1016/j.csda.2012.06.006]

75. Lee, S., and Kim, M. (2017), On entropy test for conditionally heteroscedastic location-scale time series models. Entropy, 19, 388-398. [DOI:10.3390/e19080388]

76. Lequesne, J. (2015), A goodness-of-fit test of Student distributions based on Rényi entropy. AIP Conference Proceedings, 1641, 487-494. [DOI:10.1063/1.4906014]

77. Lilliefors, H. (1967), On the Kolmogorov-Smirnov test for normality with mean and variance unknown. Journal of the American Statistical Association, 62, 399-402. [DOI:10.1080/01621459.1967.10482916]

78. Lim, J., and Park, S. (2007), Censored Kullback-Leibler information and goodness-of-fit test with type II censored data. Journal of Applied Statistics, 34, 1051-1064. [DOI:10.1080/02664760701592000]

79. Mahdizadeh, M. (2012), On the use of ranked set samples in entropy based test of fit for the Laplace distribution. Revista Colombiana de Estadística, 35, 443-455.

80. Mahdizadeh, M., and Zamanzade, E. (2017), New goodness of fit tests for the Cauchy distribution. Journal of Applied Statistics, 44, 1106-1121. [DOI:10.1080/02664763.2016.1193726]

81. Mahdizadeh, M., and Arghami, N. R. (2010), Efficiency of ranked set sampling in entropy estimation and goodness-of-fit testing for the inverse Gaussian law. Journal of Statistical Computation and Simulation, 80, 761-774. [DOI:10.1080/00949650902773551]

82. Mahdizadeh, M., and Arghami, N. R. (2013), Improved entropy based test of uniformity using ranked set samples, Statistics and Operations Research Transactions. SORT, 37, 3-18.

83. McIntyre, G. A. (1952), A method for unbiased selective sampling using ranked sets. Australian Journal of Agricultural Research, 3, 385-390. [DOI:10.1071/AR9520385]

84. Mises, R. von (1931), Wahrscheinlichkeitsrechnung und ihre Anwendung in der Statistik und theoretischen Physik. Leipzig and Vienna: Deuticke.

85. Mudholkar, G. S., and Tian, L. (2002), An entropy characterization of the inverse Gaussian distribution and related goodness-of-fit test. Journal of Statistical Planning and Inference, 102, 211-221. [DOI:10.1016/S0378-3758(01)00099-4]

86. Onken, A., Dragoi, V., and Obermayer, K. (2012), A maximum entropy test for evaluating higher-order correlations in spike counts. PLOS Computational Biology, 8, 1-11. [DOI:10.1371/journal.pcbi.1002539]

87. Park, S., Alizadeh Noughabi, H., and Kim, I. (2018), General cumulative Kullback-Leibler information. Communications in Statistics-Theory and Methods, 47, 1551-1560. [DOI:10.1080/03610926.2017.1321767]

88. Park, S., and Pakyari, R. (2015), Cumulative residual Kullback-Leibler information with the progressively type-II censored data. Statistics and Probability Letters, 106, 287-294. [DOI:10.1016/j.spl.2015.07.029]

89. Park, S. (2005), Testing exponentiality based on the Kullback-Leibler information. IEEE Transactions on Reliability, 54, 22-26. [DOI:10.1109/TR.2004.837314]

90. Park, S. (1999), A goodness-of-fit test for normality based on the sample entropy of order statistics. Statistics and Probability Letters, 44, 359-363. [DOI:10.1016/S0167-7152(99)00027-9]

91. Park, S., and Park, D. (2003), Correcting moments for goodness of fit tests based on two entropy estimates. Journal of Statistical Computation and Simulation, 73, 685-694. [DOI:10.1080/0094965031000070367]

92. Park, S., and Lim, J. (2015), On censored cumulative residual Kullback-Leibler information and goodness-of-fit test with Type-II censored data. Statistical Papers, 56, 247-256. [DOI:10.1007/s00362-014-0579-5]

93. Park, S., Rao, M., and Shin, D. (2012), On cumulative residual Kullback-Leibler information. Statistics and Probability Letters, 82, 2025-2032. [DOI:10.1016/j.spl.2012.06.015]

94. Pasha, E., Kokabi, M., and Mohtashami, G. R. (2008), Goodness-of-fit tests based on divergence measures. Journal of Applied Mathematics and Informatics, 26, 177-189.

95. Perez-Rodriguez, P., Vaquera-Huerta, H., and Villasenor-Alva, J. (2009), A goodness-of-fit test for the Gumbel distribution based on Kullback-Leibler information. Communications in Statistics: Theory and Methods, 38, 842-855. [DOI:10.1080/03610920802316658]

96. Rahman M. M., and Govindarajulu Z. (1997), A modification of the test of Shapiro and Wilk for normality. Journal of Applied Statistics, 24, 219-235. [DOI:10.1080/02664769723828]

97. Rao, M., Chen, Y., Vemuri, B. C., Wang, F. (2004), Cumulative residual entropy: a new measure of information. IEEE Transactions on Information Theory, 50, 1220-1228. [DOI:10.1109/TIT.2004.828057]

98. Senoglu, B., and Surucu, B. (2004), Goodness of fit tests based on Kullback-Leibler information. IEEE Transactions on Reliability, 53, 22-26. [DOI:10.1109/TR.2004.833319]

99. Shannon, C. E. (1948), A mathematical theory of communication. Bell System Technical Journal, 27, 379-423; 623-656. [DOI:10.1002/j.1538-7305.1948.tb00917.x]

100. Shannon, C. E. (1949), The Mathematical Theory of Communication. Urbana: University of Illinois Press.

101. Shapiro, S. S., and Francia, R. S. (1972), An approximate analysis of variance test for normality. Journal of the American Statistical Association, 67, 215-216. [DOI:10.1080/01621459.1972.10481232]

102. Shapiro, S. S., and Wilk, M. B. (1965), An analysis of variance test for normality (complete samples). Biometrika, 52, 591-611. [DOI:10.1093/biomet/52.3-4.591]

103. Sharifdoost, M., Nematollahi, N., and Pasha, E. (2009), Goodness-of-Fit test and test of independence by entropy. Journal of Mathematical Extension, 3, 43-59.

104. Song, K. S., (2000), Limit theorems for nonparametric sample entropy estimators. Statistics and Probability Letters, 49, 9-18. [DOI:10.1016/S0167-7152(00)00025-0]

105. Van Es, B. (1992), Estimating functionals related to a density by a class of statistics based on spacings. Scandinavian Journal of Statistics, 19, 61-72.

106. Vasicek, O. (1976), A test for normality based on sample entropy. Journal of the Royal Statistical Society B, 38, 54-59. [DOI:10.1111/j.2517-6161.1976.tb01566.x]

107. Vatutin, V. A. and Michailov, V. G. (1995), Statistical estimation of entropy of discrete random variables with large numbers of results. Russian Mathematical Surveys, 50, 963-976. [DOI:10.1070/RM1995v050n05ABEH002601]

108. Vexler, A., and Gurevich, G. (2010), Empirical likelihood ratios applied to goodness-of-fit tests based on sample entropy. Computational Statistics and Data Analysis, 54, 531-545. [DOI:10.1016/j.csda.2009.09.025]

109. Vexler, A., Tsai, W. M., and Hutson, A. D. (2014), A simple density-based empirical likelihood ratio test for Independence. The American Statistician, 48, 158-169. [DOI:10.1080/00031305.2014.901922]

110. Watson, G. S. (1961), Goodness of fit tests on a circle. Biometrika, 48, 109-114. [DOI:10.1093/biomet/48.1-2.109]

111. Wieczorkowski, R., and Grzegorzewski, P. (1999), Entropy estimators - improvements and comparisons. Communications in Statistics-Simulation and Computation, 28, 541-567. [DOI:10.1080/03610919908813564]

112. Yousefzadeh, F., and Arghami, N. R. (2008), Testing Exponentiality Based on Type II Censored Data and a New cdf Estimator. Communications in Statistics-Simulation and Computation, 37, 1479-1499. [DOI:10.1080/03610910802178372]

113. Zamanzade, E., and Arghami, N. R. (2009), Normality and exponentiality tests based on new entropy estimators. Journal of Statistical Science, 2, 179-200.

114. Zamanzade, E., and Arghami, N. R. (2011), Goodness of fit test based on correcting moments of modified entropy estimator. Journal of Statistical Computation and Simulation, 81, 2077-2093. [DOI:10.1080/00949655.2010.517533]

115. Zamanzade, E., and Arghami, N. R. (2012), Testing normality based on new entropy estimators. Journal of Statistical Computation and Simulation, 82, 1701-1713. [DOI:10.1080/00949655.2011.592984]

116. Zamanzade, E., and Mahdizadeh, M. (2017), Entropy estimation from ranked set samples with application to test of fit. Revista Colombiana de Estadística, 40, 223-241. [DOI:10.15446/rce.v40n2.58944]

117. Zardasht, V., Parsi, S., and Mousazadeh, M. (2015), On empirical cumulative residual entropy and a goodness-of-fit test for exponentiality. Statistical Papers, 56, 677-688. [DOI:10.1007/s00362-014-0603-9]

118. Zendehdel, J., Rezaei, M., Akbari, M. GH., Zarei, R., and Alizadeh Noughabi, H. (2018), Testing exponentiality for imprecise data and its application. Soft Computing, 22, 3301-3312. [DOI:10.1007/s00500-017-2566-y]

119. Zhang, J. (2002), Powerful goodness-of-fit tests based on the likelihood ratio. Journal of the Royal Statistical Society, Series B, 64, 281-294. [DOI:10.1111/1467-9868.00337]
