• [Open Source] Source Code Analysis of the ZXing .NET Port


    [Overview]

    ZXing ("zebra crossing") is an open-source, multi-format 1D/2D barcode image processing library implemented in Java, with ports to other languages.

    Repository:

    https://github.com/zxing/zxing

    [Project Structure]

    Taking ZXing.Net.Source.0.14.0.0 as an example, the package contains two project trees:

    Base and WinMD. This analysis focuses on the Base tree, where:

    the projects under ZXing.Net.Source.0.14.0.0\Base\Source\lib are the library source projects, with zxing.vs2012 as their solution file;

    the project under ZXing.Net.Source.0.14.0.0\Base\Clients\WindowsFormsDemo is the sample application that consumes the compiled ZXing library, with WindowsFormsDemo as its solution file.

    [Demo Application Analysis]

    WindowsFormsDemo has three tabs, Decoder, Encoder and WebCam, which implement image-file decoding, QR code generation, and webcam capture decoding respectively. The WebCam tab relies mainly on avicap32.dll, the Windows API module used to capture AVI video from cameras and other video hardware; see WebCam.cs in the project for details.
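    The demo's WebCam helper is essentially a thin wrapper over that capture API. As a rough, hypothetical sketch of the usual avicap32 P/Invoke pattern (the window-message constants come from vfw.h; the class and method names are invented for illustration and are not the demo's actual WebCam.cs code):

    using System;
    using System.Drawing;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    // Hypothetical minimal capture wrapper; the demo's WebCam.cs is more elaborate.
    class CaptureWindowSketch
    {
       // Window-message constants from vfw.h (WM_CAP_START == WM_USER == 0x400).
       const int WM_CAP_START = 0x400;
       const int WM_CAP_DRIVER_CONNECT = WM_CAP_START + 10;
       const int WM_CAP_DRIVER_DISCONNECT = WM_CAP_START + 11;
       const int WM_CAP_EDIT_COPY = WM_CAP_START + 30;
       const int WM_CAP_GRAB_FRAME = WM_CAP_START + 60;

       [DllImport("avicap32.dll")]
       static extern IntPtr capCreateCaptureWindowA(string lpszWindowName, int dwStyle,
          int x, int y, int nWidth, int nHeight, IntPtr hWndParent, int nID);

       [DllImport("user32.dll")]
       static extern int SendMessage(IntPtr hWnd, int wMsg, int wParam, int lParam);

       IntPtr hCapture;

       public void Open(Control container)
       {
          // Create a capture window as a child of the given control and connect driver 0.
          const int WS_CHILD = 0x40000000, WS_VISIBLE = 0x10000000;
          hCapture = capCreateCaptureWindowA("cap", WS_CHILD | WS_VISIBLE,
             0, 0, container.Width, container.Height, container.Handle, 0);
          SendMessage(hCapture, WM_CAP_DRIVER_CONNECT, 0, 0);
       }

       public Image GrabFrame()
       {
          // Grab the current frame, copy it to the clipboard, then read it back as an Image.
          SendMessage(hCapture, WM_CAP_GRAB_FRAME, 0, 0);
          SendMessage(hCapture, WM_CAP_EDIT_COPY, 0, 0);
          return Clipboard.GetImage();
       }

       public void Close()
       {
          SendMessage(hCapture, WM_CAP_DRIVER_DISCONNECT, 0, 0);
       }
    }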

    Decoder (image file decoding):

    private void btnStartDecoding_Click(object sender, EventArgs e)
          {
             var fileName = txtBarcodeImageFile.Text;
             if (!File.Exists(fileName))
             {
                MessageBox.Show(this, String.Format("File not found: {0}", fileName), "Error", MessageBoxButtons.OK,
                                MessageBoxIcon.Error);
                return;
             }
    
             using (var bitmap = (Bitmap)Bitmap.FromFile(fileName))
             {
                if (TryOnlyMultipleQRCodes)
                   Decode(bitmap, TryMultipleBarcodes, new List<BarcodeFormat> { BarcodeFormat.QR_CODE });
                else
                   Decode(bitmap, TryMultipleBarcodes, null);
             }
          }
    
          private void Decode(Bitmap image, bool tryMultipleBarcodes, IList<BarcodeFormat> possibleFormats)
          {
             resultPoints.Clear();
             lastResults.Clear();
             txtContent.Text = String.Empty;
    
             var timerStart = DateTime.Now.Ticks;
             Result[] results = null;
             barcodeReader.Options.PossibleFormats = possibleFormats;
             if (tryMultipleBarcodes)
                results = barcodeReader.DecodeMultiple(image);
             else
             {
                var result = barcodeReader.Decode(image);
                if (result != null)
                {
                   results = new[] {result};
                }
             }
             var timerStop = DateTime.Now.Ticks;
    
             if (results == null)
             {
                txtContent.Text = "No barcode recognized";
             }
             labDuration.Text = new TimeSpan(timerStop - timerStart).Milliseconds.ToString("0 ms");
    
             if (results != null)
             {
                foreach (var result in results)
                {
                   if (result.ResultPoints.Length > 0)
                   {
                      var rect = new Rectangle((int) result.ResultPoints[0].X, (int) result.ResultPoints[0].Y, 1, 1);
                      foreach (var point in result.ResultPoints)
                      {
                         if (point.X < rect.Left)
                            rect = new Rectangle((int) point.X, rect.Y, rect.Width + rect.X - (int) point.X, rect.Height);
                         if (point.X > rect.Right)
                            rect = new Rectangle(rect.X, rect.Y, rect.Width + (int) point.X - rect.X, rect.Height);
                         if (point.Y < rect.Top)
                            rect = new Rectangle(rect.X, (int) point.Y, rect.Width, rect.Height + rect.Y - (int) point.Y);
                         if (point.Y > rect.Bottom)
                            rect = new Rectangle(rect.X, rect.Y, rect.Width, rect.Height + (int) point.Y - rect.Y);
                      }
                      using (var g = picBarcode.CreateGraphics())
                      {
                         g.DrawRectangle(Pens.Green, rect);
                      }
                   }
                }
             }
          }

    Encoder (QR code generation):

    (To be continued.)
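    In the meantime, here is a minimal sketch of generating a QR code with the public ZXing.Net BarcodeWriter API; it illustrates the library API only and is not the demo's Encoder code (the output size and margin are assumed values):

    using System.Drawing;
    using ZXing;
    using ZXing.Common;

    static class EncoderSketch
    {
       // Render the given text as a QR code bitmap and save it to disk.
       public static void WriteQrCode(string content, string fileName)
       {
          var writer = new BarcodeWriter
          {
             Format = BarcodeFormat.QR_CODE,
             Options = new EncodingOptions { Width = 300, Height = 300, Margin = 1 }
          };
          using (Bitmap qrBitmap = writer.Write(content))
          {
             qrBitmap.Save(fileName);
          }
       }
    }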

    WebCam (webcam capture decoding):

    private void btnDecodeWebCam_Click(object sender, EventArgs e)
          {
             if (wCam == null)
             {
                wCam = new WebCam {Container = picWebCam};
    
                wCam.OpenConnection();
    
                webCamTimer = new Timer();
                webCamTimer.Tick += webCamTimer_Tick;
             webCamTimer.Interval = 200; // frame capture interval in milliseconds
                webCamTimer.Start();
    
                btnDecodeWebCam.Text = "Decoding..."; // Update UI
             }
             else
             {
                webCamTimer.Stop();
                webCamTimer = null;
                wCam.Dispose();
                wCam = null;
    
                btnDecodeWebCam.Text = "Decode"; // Update UI
             }
          }
    
          void webCamTimer_Tick(object sender, EventArgs e)
          {
          var bitmap = wCam.GetCurrentImage(); // grab the current frame from the camera
             if (bitmap == null)
                return;
             Console.WriteLine("Bitmap width is:{0}, height is{1}. Camera is: {2} mega-pixel.", bitmap.Width, bitmap.Height, bitmap.Width* bitmap.Height/10000);
             var reader = new BarcodeReader();
             var result = reader.Decode(bitmap); // Decode the image
             if (result != null)
             {
                txtTypeWebCam.Text = result.BarcodeFormat.ToString();
                txtContentWebCam.Text = result.Text;
             }
          }

    The various camera parameter settings and operations defined on the WebCam object can be found in WebCam.cs.

    [Library Source Analysis]

    1. Image decoding (QR code as an example)

    The QR code decoding flow is detection/location followed by decoding. The main files involved are BarcodeReader.cs (createBinarizer) -> BarcodeReaderGeneric.cs (createBinarizer) -> HybridBinarizer.cs (createBinarizer), together with QRCodeReader.cs, Detector.cs with FinderPatternFinder.cs, and Decoder.cs.
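    Wired together by hand, the same pipeline looks roughly like this (a simplified sketch rather than code from the library or demo; BitmapLuminanceSource is assumed to be the luminance source used by the WinForms build):

    using System.Drawing;
    using ZXing;
    using ZXing.Common;
    using ZXing.QrCode;

    static class DecodePipelineSketch
    {
       // The steps BarcodeReader performs internally:
       // bitmap -> luminance source -> binarizer -> binary bitmap -> QR reader.
       public static Result DecodeQr(Bitmap bitmap)
       {
          var luminanceSource = new BitmapLuminanceSource(bitmap); // grayscale conversion
          var binarizer = new HybridBinarizer(luminanceSource);    // local (block-based) thresholding
          var binaryBitmap = new BinaryBitmap(binarizer);          // lazily built black/white matrix
          var reader = new QRCodeReader();                         // detector + decoder
          return reader.decode(binaryBitmap, null);                // null: no decode hints
       }
    }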

    The HybridBinarizer class (selected via createBinarizer) performs binarization of the bitmap; the core code is:

    /// <summary>
          /// Calculates the final BitMatrix once for all requests. This could be called once from the
          /// constructor instead, but there are some advantages to doing it lazily, such as making
          /// profiling easier, and not doing heavy lifting when callers don't expect it.
          /// </summary>
          private void binarizeEntireImage()
          {
             if (matrix == null)
             {
                LuminanceSource source = LuminanceSource;
                int width = source.Width;
                int height = source.Height;
                if (width >= MINIMUM_DIMENSION && height >= MINIMUM_DIMENSION)
                {
                   byte[] luminances = source.Matrix;
    
                   int subWidth = width >> BLOCK_SIZE_POWER;
                   if ((width & BLOCK_SIZE_MASK) != 0)
                   {
                      subWidth++;
                   }
                   int subHeight = height >> BLOCK_SIZE_POWER;
                   if ((height & BLOCK_SIZE_MASK) != 0)
                   {
                      subHeight++;
                   }
                   int[][] blackPoints = calculateBlackPoints(luminances, subWidth, subHeight, width, height);
    
                   var newMatrix = new BitMatrix(width, height);
                   calculateThresholdForBlock(luminances, subWidth, subHeight, width, height, blackPoints, newMatrix);
                   matrix = newMatrix;
                }
                else
                {
                   // If the image is too small, fall back to the global histogram approach.
                   matrix = base.BlackMatrix;
                }
             }
          }
    
          /// <summary>
          /// For each 8x8 block in the image, calculate the average black point using a 5x5 grid
          /// of the blocks around it. Also handles the corner cases (fractional blocks are computed based
          /// on the last 8 pixels in the row/column which are also used in the previous block).
          /// PS (Jay): this algorithm has significant issues and should be improved.
          /// </summary>
          /// <param name="luminances">The luminances.</param>
          /// <param name="subWidth">Width of the sub.</param>
          /// <param name="subHeight">Height of the sub.</param>
          /// <param name="width">The width.</param>
          /// <param name="height">The height.</param>
          /// <param name="blackPoints">The black points.</param>
          /// <param name="matrix">The matrix.</param>
          private static void calculateThresholdForBlock(byte[] luminances, int subWidth, int subHeight, int width, int height, int[][] blackPoints, BitMatrix matrix)
          {
             for (int y = 0; y < subHeight; y++)
             {
                int yoffset = y << BLOCK_SIZE_POWER;
                int maxYOffset = height - BLOCK_SIZE;
                if (yoffset > maxYOffset)
                {
                   yoffset = maxYOffset;
                }
                for (int x = 0; x < subWidth; x++)
                {
                   int xoffset = x << BLOCK_SIZE_POWER;
                   int maxXOffset = width - BLOCK_SIZE;
                   if (xoffset > maxXOffset)
                   {
                      xoffset = maxXOffset;
                   }
                   int left = cap(x, 2, subWidth - 3);
                   int top = cap(y, 2, subHeight - 3);
                   int sum = 0;
                   for (int z = -2; z <= 2; z++)
                   {
                      int[] blackRow = blackPoints[top + z];
                      sum += blackRow[left - 2];
                      sum += blackRow[left - 1];
                      sum += blackRow[left];
                      sum += blackRow[left + 1];
                      sum += blackRow[left + 2];
                   }
                   int average = sum / 25;
                   thresholdBlock(luminances, xoffset, yoffset, average, width, matrix);
                }
             }
          }
    
          private static int cap(int value, int min, int max)
          {
             return value < min ? min : value > max ? max : value;
          }
    
          /// <summary>
          /// Applies a single threshold to an 8x8 block of pixels.
          /// </summary>
          /// <param name="luminances">The luminances.</param>
          /// <param name="xoffset">The xoffset.</param>
          /// <param name="yoffset">The yoffset.</param>
          /// <param name="threshold">The threshold.</param>
          /// <param name="stride">The stride.</param>
          /// <param name="matrix">The matrix.</param>
          private static void thresholdBlock(byte[] luminances, int xoffset, int yoffset, int threshold, int stride, BitMatrix matrix)
          {
             int offset = (yoffset * stride) + xoffset;
             for (int y = 0; y < BLOCK_SIZE; y++, offset += stride)
             {
                for (int x = 0; x < BLOCK_SIZE; x++)
                {
                   int pixel = luminances[offset + x] & 0xff;
                   // Comparison needs to be <=, so that black == 0 pixels are black, even if the threshold is 0.
                   matrix[xoffset + x, yoffset + y] = (pixel <= threshold);
                }
             }
          }
    
          /// <summary>
          /// Calculates a single black point for each 8x8 block of pixels and saves it away.
          /// See the following thread for a discussion of this algorithm:
          /// http://groups.google.com/group/zxing/browse_thread/thread/d06efa2c35a7ddc0
          /// </summary>
          /// <param name="luminances">The luminances.</param>
          /// <param name="subWidth">Width of the sub.</param>
          /// <param name="subHeight">Height of the sub.</param>
          /// <param name="width">The width.</param>
          /// <param name="height">The height.</param>
          /// <returns></returns>
          private static int[][] calculateBlackPoints(byte[] luminances, int subWidth, int subHeight, int width, int height)
          {
             int[][] blackPoints = new int[subHeight][];
             for (int i = 0; i < subHeight; i++)
             {
                blackPoints[i] = new int[subWidth];
             }
    
             for (int y = 0; y < subHeight; y++)
             {
                int yoffset = y << BLOCK_SIZE_POWER;
                int maxYOffset = height - BLOCK_SIZE;
                if (yoffset > maxYOffset)
                {
                   yoffset = maxYOffset;
                }
                for (int x = 0; x < subWidth; x++)
                {
                   int xoffset = x << BLOCK_SIZE_POWER;
                   int maxXOffset = width - BLOCK_SIZE;
                   if (xoffset > maxXOffset)
                   {
                      xoffset = maxXOffset;
                   }
                   int sum = 0;
                   int min = 0xFF;
                   int max = 0;
                   for (int yy = 0, offset = yoffset * width + xoffset; yy < BLOCK_SIZE; yy++, offset += width)
                   {
                      for (int xx = 0; xx < BLOCK_SIZE; xx++)
                      {
                         int pixel = luminances[offset + xx] & 0xFF;
                         // still looking for good contrast
                         sum += pixel;
                         if (pixel < min)
                         {
                            min = pixel;
                         }
                         if (pixel > max)
                         {
                            max = pixel;
                         }
                      }
                      // short-circuit min/max tests once dynamic range is met
                      if (max - min > MIN_DYNAMIC_RANGE)
                      {
                         // finish the rest of the rows quickly
                         for (yy++, offset += width; yy < BLOCK_SIZE; yy++, offset += width)
                         {
                            for (int xx = 0; xx < BLOCK_SIZE; xx++)
                            {
                               sum += luminances[offset + xx] & 0xFF;
                            }
                         }
                      }
                   }
    
                   // The default estimate is the average of the values in the block.
                   int average = sum >> (BLOCK_SIZE_POWER * 2);
                   if (max - min <= MIN_DYNAMIC_RANGE)
                   {
                      // If variation within the block is low, assume this is a block with only light or only
                      // dark pixels. In that case we do not want to use the average, as it would divide this
                      // low contrast area into black and white pixels, essentially creating data out of noise.
                      //
                      // The default assumption is that the block is light/background. Since no estimate for
                      // the level of dark pixels exists locally, use half the min for the block.
                      average = min >> 1;
    
                      if (y > 0 && x > 0)
                      {
                         // Correct the "white background" assumption for blocks that have neighbors by comparing
                         // the pixels in this block to the previously calculated black points. This is based on
                         // the fact that dark barcode symbology is always surrounded by some amount of light
                         // background for which reasonable black point estimates were made. The bp estimated at
                         // the boundaries is used for the interior.
    
                         // The (min < bp) is arbitrary but works better than other heuristics that were tried.
                         int averageNeighborBlackPoint = (blackPoints[y - 1][x] + (2 * blackPoints[y][x - 1]) +
                             blackPoints[y - 1][x - 1]) >> 2;
                         if (min < averageNeighborBlackPoint)
                         {
                            average = averageNeighborBlackPoint;
                         }
                      }
                   }
                   blackPoints[y][x] = average;
                }
             }
             return blackPoints;
          }

    This part of the algorithm leaves room for improvement. In GlobalHistogramBinarizer, the class HybridBinarizer inherits from, five rows are sampled evenly across the full image height and the middle four fifths of each row are taken as samples. A histogram is then built with the gray value on the X axis and the number of sampled pixels at that gray value on the Y axis. The gray value with the largest count is taken as the first peak; every other gray value is scored as its count multiplied by the square of its distance from the first peak, and the highest-scoring value becomes the second peak. A threshold is then chosen between the two peaks (the peak with the larger gray value corresponds to the light pixels, the smaller one to the dark pixels); the rule is to pick a point as close as possible to the lighter peak while containing as few pixels as possible. Once the threshold is fixed the rest is easy: every pixel in the image is compared against it, pixels darker than the threshold are treated as black and set to 1 in the new matrix, and the rest are white and set to 0. The concrete code is in the overridden BlackMatrix member of GlobalHistogramBinarizer. The drawback of this approach is that the threshold is computed globally, so it copes poorly with local shadows ("However, because it picks a global black point, it cannot handle difficult shadows and gradients.").
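    For intuition only, the black-point estimate described above can be sketched like this (a simplified illustration of the idea, not the actual GlobalHistogramBinarizer code; the valley tie-breaking rule is an assumption based on the description above):

    // Simplified sketch of the global black-point estimate described above.
    // The real logic lives in GlobalHistogramBinarizer's BlackMatrix override.
    static int EstimateBlackPoint(int[] histogram)
    {
       int numBuckets = histogram.Length;

       // First peak: the gray value with the largest pixel count.
       int firstPeak = 0;
       for (int i = 1; i < numBuckets; i++)
          if (histogram[i] > histogram[firstPeak])
             firstPeak = i;

       // Second peak: score each gray value as count * (distance to first peak)^2.
       int secondPeak = 0;
       long bestScore = 0;
       for (int i = 0; i < numBuckets; i++)
       {
          long distance = i - firstPeak;
          long score = histogram[i] * distance * distance;
          if (score > bestScore)
          {
             bestScore = score;
             secondPeak = i;
          }
       }

       // Make sure firstPeak is the darker (smaller) gray value.
       if (firstPeak > secondPeak)
       {
          int tmp = firstPeak; firstPeak = secondPeak; secondPeak = tmp;
       }

       // Degenerate histogram: not enough separation between the two peaks.
       if (secondPeak - firstPeak <= 1)
          return firstPeak;

       // Threshold: between the peaks, prefer the gray value with the fewest pixels,
       // breaking ties toward the lighter peak (the rule described above).
       int threshold = secondPeak - 1;
       for (int i = secondPeak - 1; i > firstPeak; i--)
          if (histogram[i] < histogram[threshold])
             threshold = i;
       return threshold;
    }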

    The QRCodeReader class implements the Reader interface; the core code is:

    /// <summary>
          /// Locates and decodes a barcode in some format within an image. This method also accepts
          /// hints, each possibly associated to some data, which may help the implementation decode.
          /// </summary>
          /// <param name="image">image of barcode to decode</param>
          /// <param name="hints">passed as a <see cref="IDictionary{TKey, TValue}"/> from <see cref="DecodeHintType"/>
          /// to arbitrary data. The
          /// meaning of the data depends upon the hint type. The implementation may or may not do
          /// anything with these hints.</param>
          /// <returns>
          /// String which the barcode encodes
          /// </returns>
          public Result decode(BinaryBitmap image, IDictionary<DecodeHintType, object> hints)
          {
             DecoderResult decoderResult;
             ResultPoint[] points;
             if (image == null || image.BlackMatrix == null)
             {
                // something is wrong with the image
                return null;
             }
             if (hints != null && hints.ContainsKey(DecodeHintType.PURE_BARCODE)) // 纯barcode图片
             {
                var bits = extractPureBits(image.BlackMatrix);
                if (bits == null)
                   return null;
                decoderResult = decoder.decode(bits, hints);
                points = NO_POINTS;
             }
             else
             {
             var detectorResult = new Detector(image.BlackMatrix).detect(hints); // detect/locate the barcode
                if (detectorResult == null)
                   return null;
             decoderResult = decoder.decode(detectorResult.Bits, hints); // decode the sampled bits
                points = detectorResult.Points;
             }
             if (decoderResult == null)
                return null;
    
             // If the code was mirrored: swap the bottom-left and the top-right points.
             var data = decoderResult.Other as QRCodeDecoderMetaData;
             if (data != null)
             {
                data.applyMirroredCorrection(points);
             }
    
             var result = new Result(decoderResult.Text, decoderResult.RawBytes, points, BarcodeFormat.QR_CODE);
             var byteSegments = decoderResult.ByteSegments;
             if (byteSegments != null)
             {
                result.putMetadata(ResultMetadataType.BYTE_SEGMENTS, byteSegments);
             }
             var ecLevel = decoderResult.ECLevel;
             if (ecLevel != null)
             {
                result.putMetadata(ResultMetadataType.ERROR_CORRECTION_LEVEL, ecLevel);
             }
             if (decoderResult.StructuredAppend)
             {
                result.putMetadata(ResultMetadataType.STRUCTURED_APPEND_SEQUENCE, decoderResult.StructuredAppendSequenceNumber);
                result.putMetadata(ResultMetadataType.STRUCTURED_APPEND_PARITY, decoderResult.StructuredAppendParity);
             }
             return result;
          }
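    As the decode method shows, its behaviour can be steered through hints. For example, a caller that knows the image contains nothing but the code itself can pass PURE_BARCODE so the detector stage is skipped (a usage sketch, not code from the demo):

    using System.Collections.Generic;
    using ZXing;
    using ZXing.QrCode;

    static class PureBarcodeSketch
    {
       // With PURE_BARCODE set, QRCodeReader calls extractPureBits instead of running the detector.
       public static Result DecodePureQr(BinaryBitmap binaryBitmap)
       {
          var hints = new Dictionary<DecodeHintType, object>
          {
             { DecodeHintType.PURE_BARCODE, true }
          };
          return new QRCodeReader().decode(binaryBitmap, hints);
       }
    }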

    The Detector class in the qrcode->detector directory:

    namespace ZXing.QrCode.Internal
    {
       /// <summary>
       /// <p>Encapsulates logic that can detect a QR Code in an image, even if the QR Code
       /// is rotated or skewed, or partially obscured.</p>
       /// </summary>
       /// <author>Sean Owen</author>
       public class Detector
       {
          private readonly BitMatrix image;
          private ResultPointCallback resultPointCallback;
    
          /// <summary>
          /// Initializes a new instance of the <see cref="Detector"/> class.
          /// </summary>
          /// <param name="image">The image.</param>
          public Detector(BitMatrix image)
          {
             this.image = image;
          }
    
          /// <summary>
          /// Gets the image.
          /// </summary>
          virtual protected internal BitMatrix Image
          {
             get
             {
                return image;
             }
          }
    
          /// <summary>
          /// Gets the result point callback.
          /// </summary>
          virtual protected internal ResultPointCallback ResultPointCallback
          {
             get
             {
                return resultPointCallback;
             }
          }
    
          /// <summary>
          ///   <p>Detects a QR Code in an image, simply.</p>
          /// </summary>
          /// <returns>
          ///   <see cref="DetectorResult"/> encapsulating results of detecting a QR Code
          /// </returns>
          public virtual DetectorResult detect()
          {
             return detect(null);
          }
    
          /// <summary>
          ///   <p>Detects a QR Code in an image, simply.</p>
          /// </summary>
          /// <param name="hints">optional hints to detector</param>
          /// <returns>
          ///   <see cref="DetectorResult"/> encapsulating results of detecting a QR Code
          /// </returns>
          public virtual DetectorResult detect(IDictionary<DecodeHintType, object> hints)
          {
             resultPointCallback = hints == null || !hints.ContainsKey(DecodeHintType.NEED_RESULT_POINT_CALLBACK) ? null : (ResultPointCallback)hints[DecodeHintType.NEED_RESULT_POINT_CALLBACK];
    
             FinderPatternFinder finder = new FinderPatternFinder(image, resultPointCallback);
             FinderPatternInfo info = finder.find(hints);
             if (info == null)
                return null;
    
             return processFinderPatternInfo(info);
          }
    
          /// <summary>
          /// Processes the finder pattern info.
          /// </summary>
          /// <param name="info">The info.</param>
          /// <returns></returns>
          protected internal virtual DetectorResult processFinderPatternInfo(FinderPatternInfo info)
          {
             FinderPattern topLeft = info.TopLeft;
             FinderPattern topRight = info.TopRight;
             FinderPattern bottomLeft = info.BottomLeft;
    
             float moduleSize = calculateModuleSize(topLeft, topRight, bottomLeft);
             if (moduleSize < 1.0f)
             {
                return null;
             }
             int dimension;
             if (!computeDimension(topLeft, topRight, bottomLeft, moduleSize, out dimension))
                return null;
             Internal.Version provisionalVersion = Internal.Version.getProvisionalVersionForDimension(dimension);
             if (provisionalVersion == null)
                return null;
             int modulesBetweenFPCenters = provisionalVersion.DimensionForVersion - 7;
    
             AlignmentPattern alignmentPattern = null;
             // Anything above version 1 has an alignment pattern
             if (provisionalVersion.AlignmentPatternCenters.Length > 0)
             {
    
                // Guess where a "bottom right" finder pattern would have been
                float bottomRightX = topRight.X - topLeft.X + bottomLeft.X;
                float bottomRightY = topRight.Y - topLeft.Y + bottomLeft.Y;
    
                // Estimate that alignment pattern is closer by 3 modules
                // from "bottom right" to known top left location
                float correctionToTopLeft = 1.0f - 3.0f / (float)modulesBetweenFPCenters;
                int estAlignmentX = (int)(topLeft.X + correctionToTopLeft * (bottomRightX - topLeft.X));
                int estAlignmentY = (int)(topLeft.Y + correctionToTopLeft * (bottomRightY - topLeft.Y));
    
                // Kind of arbitrary -- expand search radius before giving up
                for (int i = 4; i <= 16; i <<= 1)
                {
                   alignmentPattern = findAlignmentInRegion(moduleSize, estAlignmentX, estAlignmentY, (float)i);
                   if (alignmentPattern == null)
                      continue;
                   break;
                }
                // If we didn't find alignment pattern... well try anyway without it
             }
    
             PerspectiveTransform transform = createTransform(topLeft, topRight, bottomLeft, alignmentPattern, dimension);
    
             BitMatrix bits = sampleGrid(image, transform, dimension);
             if (bits == null)
                return null;
    
             ResultPoint[] points;
             if (alignmentPattern == null)
             {
                points = new ResultPoint[] { bottomLeft, topLeft, topRight };
             }
             else
             {
                points = new ResultPoint[] { bottomLeft, topLeft, topRight, alignmentPattern };
             }
             return new DetectorResult(bits, points);
          }
    
          private static PerspectiveTransform createTransform(ResultPoint topLeft, ResultPoint topRight, ResultPoint bottomLeft, ResultPoint alignmentPattern, int dimension)
          {
             float dimMinusThree = (float)dimension - 3.5f;
             float bottomRightX;
             float bottomRightY;
             float sourceBottomRightX;
             float sourceBottomRightY;
             if (alignmentPattern != null)
             {
                bottomRightX = alignmentPattern.X;
                bottomRightY = alignmentPattern.Y;
                sourceBottomRightX = sourceBottomRightY = dimMinusThree - 3.0f;
             }
             else
             {
                // Don't have an alignment pattern, just make up the bottom-right point
                bottomRightX = (topRight.X - topLeft.X) + bottomLeft.X;
                bottomRightY = (topRight.Y - topLeft.Y) + bottomLeft.Y;
                sourceBottomRightX = sourceBottomRightY = dimMinusThree;
             }
    
             return PerspectiveTransform.quadrilateralToQuadrilateral(
                3.5f,
                3.5f,
                dimMinusThree,
                3.5f,
                sourceBottomRightX,
                sourceBottomRightY,
                3.5f,
                dimMinusThree,
                topLeft.X,
                topLeft.Y,
                topRight.X,
                topRight.Y,
                bottomRightX,
                bottomRightY,
                bottomLeft.X,
                bottomLeft.Y);
          }
    
          private static BitMatrix sampleGrid(BitMatrix image, PerspectiveTransform transform, int dimension)
          {
             GridSampler sampler = GridSampler.Instance;
             return sampler.sampleGrid(image, dimension, dimension, transform);
          }
    
          /// <summary> <p>Computes the dimension (number of modules on a size) of the QR Code based on the position
          /// of the finder patterns and estimated module size.</p>
          /// </summary>
          private static bool computeDimension(ResultPoint topLeft, ResultPoint topRight, ResultPoint bottomLeft, float moduleSize, out int dimension)
          {
             int tltrCentersDimension = MathUtils.round(ResultPoint.distance(topLeft, topRight) / moduleSize);
             int tlblCentersDimension = MathUtils.round(ResultPoint.distance(topLeft, bottomLeft) / moduleSize);
             dimension = ((tltrCentersDimension + tlblCentersDimension) >> 1) + 7;
             switch (dimension & 0x03)
             {
                // mod 4
                case 0:
                   dimension++;
                   break;
                // 1? do nothing
                case 2:
                   dimension--;
                   break;
                case 3:
                   return true;
             }
             return true;
          }
    
          /// <summary> <p>Computes an average estimated module size based on estimated derived from the positions
          /// of the three finder patterns.</p>
          /// </summary>
          protected internal virtual float calculateModuleSize(ResultPoint topLeft, ResultPoint topRight, ResultPoint bottomLeft)
          {
             // Take the average
             return (calculateModuleSizeOneWay(topLeft, topRight) + calculateModuleSizeOneWay(topLeft, bottomLeft)) / 2.0f;
          }
    
          /// <summary> <p>Estimates module size based on two finder patterns -- it uses
          /// {@link #sizeOfBlackWhiteBlackRunBothWays(int, int, int, int)} to figure the
          /// width of each, measuring along the axis between their centers.</p>
          /// </summary>
          private float calculateModuleSizeOneWay(ResultPoint pattern, ResultPoint otherPattern)
          {
             float moduleSizeEst1 = sizeOfBlackWhiteBlackRunBothWays((int)pattern.X, (int)pattern.Y, (int)otherPattern.X, (int)otherPattern.Y);
             float moduleSizeEst2 = sizeOfBlackWhiteBlackRunBothWays((int)otherPattern.X, (int)otherPattern.Y, (int)pattern.X, (int)pattern.Y);
             if (Single.IsNaN(moduleSizeEst1))
             {
                return moduleSizeEst2 / 7.0f;
             }
             if (Single.IsNaN(moduleSizeEst2))
             {
                return moduleSizeEst1 / 7.0f;
             }
             // Average them, and divide by 7 since we've counted the width of 3 black modules,
             // and 1 white and 1 black module on either side. Ergo, divide sum by 14.
             return (moduleSizeEst1 + moduleSizeEst2) / 14.0f;
          }
    
          /// <summary> See {@link #sizeOfBlackWhiteBlackRun(int, int, int, int)}; computes the total width of
          /// a finder pattern by looking for a black-white-black run from the center in the direction
          /// of another point (another finder pattern center), and in the opposite direction too.
          /// </summary>
          private float sizeOfBlackWhiteBlackRunBothWays(int fromX, int fromY, int toX, int toY)
          {
    
             float result = sizeOfBlackWhiteBlackRun(fromX, fromY, toX, toY);
    
             // Now count other way -- don't run off image though of course
             float scale = 1.0f;
             int otherToX = fromX - (toX - fromX);
             if (otherToX < 0)
             {
                scale = (float)fromX / (float)(fromX - otherToX);
                otherToX = 0;
             }
             else if (otherToX >= image.Width)
             {
                scale = (float)(image.Width - 1 - fromX) / (float)(otherToX - fromX);
                otherToX = image.Width - 1;
             }
             int otherToY = (int)(fromY - (toY - fromY) * scale);
    
             scale = 1.0f;
             if (otherToY < 0)
             {
                scale = (float)fromY / (float)(fromY - otherToY);
                otherToY = 0;
             }
             else if (otherToY >= image.Height)
             {
                scale = (float)(image.Height - 1 - fromY) / (float)(otherToY - fromY);
                otherToY = image.Height - 1;
             }
             otherToX = (int)(fromX + (otherToX - fromX) * scale);
    
             result += sizeOfBlackWhiteBlackRun(fromX, fromY, otherToX, otherToY);
             return result - 1.0f; // -1 because we counted the middle pixel twice
          }
    
          /// <summary> <p>This method traces a line from a point in the image, in the direction towards another point.
          /// It begins in a black region, and keeps going until it finds white, then black, then white again.
          /// It reports the distance from the start to this point.</p>
          /// 
          /// <p>This is used when figuring out how wide a finder pattern is, when the finder pattern
          /// may be skewed or rotated.</p>
          /// </summary>
          private float sizeOfBlackWhiteBlackRun(int fromX, int fromY, int toX, int toY)
          {
             // Mild variant of Bresenham's algorithm;
             // see http://en.wikipedia.org/wiki/Bresenham's_line_algorithm
             bool steep = Math.Abs(toY - fromY) > Math.Abs(toX - fromX);
             if (steep)
             {
                int temp = fromX;
                fromX = fromY;
                fromY = temp;
                temp = toX;
                toX = toY;
                toY = temp;
             }
    
             int dx = Math.Abs(toX - fromX);
             int dy = Math.Abs(toY - fromY);
             int error = -dx >> 1;
             int xstep = fromX < toX ? 1 : -1;
             int ystep = fromY < toY ? 1 : -1;
    
             // In black pixels, looking for white, first or second time.
             int state = 0;
             // Loop up until x == toX, but not beyond
             int xLimit = toX + xstep;
             for (int x = fromX, y = fromY; x != xLimit; x += xstep)
             {
                int realX = steep ? y : x;
                int realY = steep ? x : y;
    
                // Does current pixel mean we have moved white to black or vice versa?
                // Scanning black in state 0,2 and white in state 1, so if we find the wrong
                // color, advance to next state or end if we are in state 2 already
                if ((state == 1) == image[realX, realY])
                {
                   if (state == 2)
                   {
                      return MathUtils.distance(x, y, fromX, fromY);
                   }
                   state++;
                }
                error += dy;
                if (error > 0)
                {
                   if (y == toY)
                   {
                      break;
                   }
                   y += ystep;
                   error -= dx;
                }
             }
             // Found black-white-black; give the benefit of the doubt that the next pixel outside the image
             // is "white" so this last point at (toX+xStep,toY) is the right ending. This is really a
             // small approximation; (toX+xStep,toY+yStep) might be really correct. Ignore this.
             if (state == 2)
             {
                return MathUtils.distance(toX + xstep, toY, fromX, fromY);
             }
             // else we didn't find even black-white-black; no estimate is really possible
             return Single.NaN;
    
          }
    
          /// <summary>
          ///   <p>Attempts to locate an alignment pattern in a limited region of the image, which is
          /// guessed to contain it. This method uses {@link AlignmentPattern}.</p>
          /// </summary>
          /// <param name="overallEstModuleSize">estimated module size so far</param>
          /// <param name="estAlignmentX">x coordinate of center of area probably containing alignment pattern</param>
          /// <param name="estAlignmentY">y coordinate of above</param>
          /// <param name="allowanceFactor">number of pixels in all directions to search from the center</param>
          /// <returns>
          ///   <see cref="AlignmentPattern"/> if found, or null otherwise
          /// </returns>
          protected AlignmentPattern findAlignmentInRegion(float overallEstModuleSize, int estAlignmentX, int estAlignmentY, float allowanceFactor)
          {
             // Look for an alignment pattern (3 modules in size) around where it
             // should be
             int allowance = (int)(allowanceFactor * overallEstModuleSize);
             int alignmentAreaLeftX = Math.Max(0, estAlignmentX - allowance);
             int alignmentAreaRightX = Math.Min(image.Width - 1, estAlignmentX + allowance);
             if (alignmentAreaRightX - alignmentAreaLeftX < overallEstModuleSize * 3)
             {
                return null;
             }
    
             int alignmentAreaTopY = Math.Max(0, estAlignmentY - allowance);
             int alignmentAreaBottomY = Math.Min(image.Height - 1, estAlignmentY + allowance);
    
             var alignmentFinder = new AlignmentPatternFinder(
                image,
                alignmentAreaLeftX,
                alignmentAreaTopY,
                alignmentAreaRightX - alignmentAreaLeftX,
                alignmentAreaBottomY - alignmentAreaTopY,
                overallEstModuleSize,
                resultPointCallback);
    
             return alignmentFinder.find();
          }
       }
    }
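    Used on its own, the detector stage can also be exercised directly. A minimal usage sketch (the DetectorResult namespace is assumed; Bits and Points are the members used by QRCodeReader above):

    using ZXing;
    using ZXing.Common;
    using ZXing.QrCode.Internal;

    static class DetectorUsageSketch
    {
       public static DetectorResult DetectQr(BinaryBitmap binaryBitmap)
       {
          var detector = new Detector(binaryBitmap.BlackMatrix);

          // detect() returns null when no finder patterns are found; otherwise
          // DetectorResult.Bits is the perspective-corrected module grid and
          // DetectorResult.Points holds the finder (and alignment) pattern centers.
          return detector.detect();
       }
    }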

    The FinderPatternFinder class in the qrcode->detector directory:

  • Original post: https://www.cnblogs.com/jayhust/p/8376990.html