• [LeetCode] 230. Kth Smallest Element in a BST


    Description

    Given a binary search tree, write a function kthSmallest to find the kth smallest element in it.

    Examples

    Example 1

    Input: root = [3,1,4,null,2], k = 1
       3
      / \
     1   4
      \
       2
    Output: 1
    

    Example 2

    Input: root = [5,3,6,2,4,null,null,1], k = 3
           5
          / \
         3   6
        / \
       2   4
      /
     1
    Output: 3
    

    Follow up

    What if the BST is modified (insert/delete operations) often and you need to find the kth smallest frequently? How would you optimize the kthSmallest routine?

    Constraints

    • The number of elements of the BST is between 1 and 10^4.

    • You may assume k is always valid, 1 ≤ k ≤ BST's total elements.

    Hints

    1. Try to utilize the property of a BST.

    2. Try in-order traversal. (Credits to @chan13)

    3. What if you could modify the BST node's structure?

    4. The optimal runtime complexity is O(height of BST).

    Solution

    Since an in-order traversal of a binary search tree visits the values in sorted order, we can use in-order traversal to find the kth smallest element, with O(N) time complexity. The code is as follows:

    /**
     * Example:
     * var ti = TreeNode(5)
     * var v = ti.`val`
     * Definition for a binary tree node.
     * class TreeNode(var `val`: Int) {
     *     var left: TreeNode? = null
     *     var right: TreeNode? = null
     * }
     */
    class Solution {
        private var index = 0   // number of nodes visited so far in the in-order traversal
        private var result = -1 // value of the kth smallest node once found
    
        fun kthSmallest(root: TreeNode?, k: Int): Int {
            inOrder(root, k)
            return result
        }
    
        private fun inOrder(root: TreeNode?, k: Int) {
            // stop recursing once the kth element has been recorded
            if (index >= k && result != -1) {
                return
            }
            if (root != null) {
                inOrder(root.left, k)   // smaller values are in the left subtree
                index++
                if (index == k) {
                    result = root.`val`
                    return
                }
                inOrder(root.right, k)  // larger values are in the right subtree
            }
        }
    }
    

    Since a BST has this ordering property, why not use a binary-search-style approach? Looking through the Discussion section, there is indeed such a solution. The code is as follows:

    /**
     * Example:
     * var ti = TreeNode(5)
     * var v = ti.`val`
     * Definition for a binary tree node.
     * class TreeNode(var `val`: Int) {
     *     var left: TreeNode? = null
     *     var right: TreeNode? = null
     * }
     */
    class Solution {
        fun kthSmallest(root: TreeNode?, k: Int): Int {
            // number of nodes in the left subtree, i.e. values smaller than the root
            val leftNodes = nodeCount(root?.left)
    
            return when {
                k <= leftNodes -> kthSmallest(root?.left, k)                     // answer lies in the left subtree
                k > leftNodes + 1 -> kthSmallest(root?.right, k - 1 - leftNodes) // skip the left subtree and the root
                else -> root!!.`val`                                             // the root itself is the kth smallest
            }
        }
    
        private fun nodeCount(root: TreeNode?): Int = if (root == null) {
            0
        } else {
            1 + nodeCount(root.left) + nodeCount(root.right)
        }
    }
    

    However, counting the nodes like this costs extra time on every query, so the counts should be stored in the nodes themselves, which is exactly what Hint 3 is getting at.
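
    Below is a minimal sketch of that idea (and of the follow-up question): each node caches the size of its subtree, so a query only walks one root-to-target path, matching Hint 4's O(height of BST) bound. SizedNode and insert are hypothetical names used for illustration here, not LeetCode's TreeNode API; deletion would have to decrement the sizes symmetrically.

    // Hypothetical augmented BST node: size is the number of nodes in the
    // subtree rooted at this node, maintained on every insert.
    class SizedNode(var `val`: Int) {
        var left: SizedNode? = null
        var right: SizedNode? = null
        var size: Int = 1
    }

    // Standard BST insert that also bumps the size of every node on the path.
    fun insert(root: SizedNode?, value: Int): SizedNode {
        if (root == null) return SizedNode(value)
        root.size++
        if (value < root.`val`) root.left = insert(root.left, value)
        else root.right = insert(root.right, value)
        return root
    }

    // O(height) query: compare k with the left subtree's size at each step.
    fun kthSmallest(root: SizedNode?, k: Int): Int {
        var node = root!!
        var remaining = k
        while (true) {
            val leftSize = node.left?.size ?: 0
            node = when {
                remaining <= leftSize -> node.left!!            // kth smallest is in the left subtree
                remaining == leftSize + 1 -> return node.`val`  // current node is the kth smallest
                else -> {                                       // skip the left subtree and this node
                    remaining -= leftSize + 1
                    node.right!!
                }
            }
        }
    }

    With the sizes kept up to date on each modification, every query costs O(height) instead of re-counting nodes, which is the optimization the follow-up asks for.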

    The official Solution instead uses a stack to turn the recursive search into an iterative one. The code is as follows (though I don't quite see what advantage this brings):

    /**
     * Example:
     * var ti = TreeNode(5)
     * var v = ti.`val`
     * Definition for a binary tree node.
     * class TreeNode(var `val`: Int) {
     *     var left: TreeNode? = null
     *     var right: TreeNode? = null
     * }
     */
    class Solution {
        fun kthSmallest(root: TreeNode?, k: Int): Int {
            // Kotlin's ArrayDeque used as a stack: addLast pushes, removeLast pops
            val stack = ArrayDeque<TreeNode>()
    
            var p = root
            var count = k
            while (true) {
                // push the whole left spine so the smallest unvisited node ends up on top
                while (p != null) {
                    stack.addLast(p)
                    p = p.left
                }
                p = stack.removeLast()
                if (--count == 0) {
                    return p.`val`
                }
                // continue the in-order traversal in the right subtree
                p = p.right
            }
        }
    }
    
  • Original post: https://www.cnblogs.com/zhongju/p/13905179.html